00:00:00.001 Started by upstream project "autotest-per-patch" build number 132550 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.091 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.092 The recommended git tool is: git 00:00:00.092 using credential 00000000-0000-0000-0000-000000000002 00:00:00.094 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.152 Fetching changes from the remote Git repository 00:00:00.155 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.217 Using shallow fetch with depth 1 00:00:00.217 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.217 > git --version # timeout=10 00:00:00.270 > git --version # 'git version 2.39.2' 00:00:00.270 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.311 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.311 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.024 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.037 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.049 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:06.049 > git config core.sparsecheckout # timeout=10 00:00:06.061 > git read-tree -mu HEAD # timeout=10 00:00:06.078 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:06.102 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:06.102 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:06.197 [Pipeline] Start of Pipeline 00:00:06.209 [Pipeline] library 00:00:06.211 Loading library shm_lib@master 00:00:06.211 Library shm_lib@master is cached. Copying from home. 00:00:06.224 [Pipeline] node 00:00:21.225 Still waiting to schedule task 00:00:21.226 Waiting for next available executor on ‘vagrant-vm-host’ 00:00:47.288 Running on VM-host-SM38 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:47.290 [Pipeline] { 00:00:47.300 [Pipeline] catchError 00:00:47.302 [Pipeline] { 00:00:47.316 [Pipeline] wrap 00:00:47.325 [Pipeline] { 00:00:47.334 [Pipeline] stage 00:00:47.336 [Pipeline] { (Prologue) 00:00:47.357 [Pipeline] echo 00:00:47.359 Node: VM-host-SM38 00:00:47.366 [Pipeline] cleanWs 00:00:47.400 [WS-CLEANUP] Deleting project workspace... 00:00:47.400 [WS-CLEANUP] Deferred wipeout is used... 
00:00:47.434 [WS-CLEANUP] done 00:00:47.645 [Pipeline] setCustomBuildProperty 00:00:47.738 [Pipeline] httpRequest 00:00:48.213 [Pipeline] echo 00:00:48.214 Sorcerer 10.211.164.20 is alive 00:00:48.225 [Pipeline] retry 00:00:48.227 [Pipeline] { 00:00:48.242 [Pipeline] httpRequest 00:00:48.248 HttpMethod: GET 00:00:48.249 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:48.249 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:48.261 Response Code: HTTP/1.1 200 OK 00:00:48.262 Success: Status code 200 is in the accepted range: 200,404 00:00:48.263 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:52.462 [Pipeline] } 00:00:52.479 [Pipeline] // retry 00:00:52.486 [Pipeline] sh 00:00:52.852 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:52.872 [Pipeline] httpRequest 00:00:53.331 [Pipeline] echo 00:00:53.333 Sorcerer 10.211.164.20 is alive 00:00:53.343 [Pipeline] retry 00:00:53.346 [Pipeline] { 00:00:53.361 [Pipeline] httpRequest 00:00:53.366 HttpMethod: GET 00:00:53.367 URL: http://10.211.164.20/packages/spdk_fc308e3c5534f0ccdab5f7e3553a7b8a3948fc16.tar.gz 00:00:53.368 Sending request to url: http://10.211.164.20/packages/spdk_fc308e3c5534f0ccdab5f7e3553a7b8a3948fc16.tar.gz 00:00:53.376 Response Code: HTTP/1.1 200 OK 00:00:53.377 Success: Status code 200 is in the accepted range: 200,404 00:00:53.377 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_fc308e3c5534f0ccdab5f7e3553a7b8a3948fc16.tar.gz 00:01:14.920 [Pipeline] } 00:01:14.940 [Pipeline] // retry 00:01:14.949 [Pipeline] sh 00:01:15.231 + tar --no-same-owner -xf spdk_fc308e3c5534f0ccdab5f7e3553a7b8a3948fc16.tar.gz 00:01:18.521 [Pipeline] sh 00:01:18.798 + git -C spdk log --oneline -n5 00:01:18.799 fc308e3c5 accel: Fix comments for spdk_accel_*_dif_verify_copy() 00:01:18.799 e43b3b914 bdev: Clean up duplicated asserts in bdev_io_pull_data() 00:01:18.799 752c08b51 bdev: Rename _bdev_memory_domain_io_get_buf() to bdev_io_get_bounce_buf() 00:01:18.799 22fe262e0 bdev: Relocate _bdev_memory_domain_io_get_buf_cb() close to _bdev_io_submit_ext() 00:01:18.799 3c6c4e019 bdev: Factor out checking bounce buffer necessity into helper function 00:01:18.817 [Pipeline] writeFile 00:01:18.832 [Pipeline] sh 00:01:19.114 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:19.126 [Pipeline] sh 00:01:19.405 + cat autorun-spdk.conf 00:01:19.406 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:19.406 SPDK_TEST_NVMF=1 00:01:19.406 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:19.406 SPDK_TEST_URING=1 00:01:19.406 SPDK_TEST_USDT=1 00:01:19.406 SPDK_RUN_UBSAN=1 00:01:19.406 NET_TYPE=virt 00:01:19.406 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:19.412 RUN_NIGHTLY=0 00:01:19.414 [Pipeline] } 00:01:19.428 [Pipeline] // stage 00:01:19.443 [Pipeline] stage 00:01:19.446 [Pipeline] { (Run VM) 00:01:19.459 [Pipeline] sh 00:01:19.735 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:19.735 + echo 'Start stage prepare_nvme.sh' 00:01:19.735 Start stage prepare_nvme.sh 00:01:19.735 + [[ -n 3 ]] 00:01:19.735 + disk_prefix=ex3 00:01:19.735 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:01:19.735 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:01:19.735 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:01:19.735 ++ SPDK_RUN_FUNCTIONAL_TEST=1 
00:01:19.735 ++ SPDK_TEST_NVMF=1 00:01:19.735 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:19.735 ++ SPDK_TEST_URING=1 00:01:19.735 ++ SPDK_TEST_USDT=1 00:01:19.735 ++ SPDK_RUN_UBSAN=1 00:01:19.735 ++ NET_TYPE=virt 00:01:19.735 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:19.735 ++ RUN_NIGHTLY=0 00:01:19.735 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:19.735 + nvme_files=() 00:01:19.735 + declare -A nvme_files 00:01:19.735 + backend_dir=/var/lib/libvirt/images/backends 00:01:19.735 + nvme_files['nvme.img']=5G 00:01:19.735 + nvme_files['nvme-cmb.img']=5G 00:01:19.735 + nvme_files['nvme-multi0.img']=4G 00:01:19.735 + nvme_files['nvme-multi1.img']=4G 00:01:19.735 + nvme_files['nvme-multi2.img']=4G 00:01:19.735 + nvme_files['nvme-openstack.img']=8G 00:01:19.735 + nvme_files['nvme-zns.img']=5G 00:01:19.735 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:19.735 + (( SPDK_TEST_FTL == 1 )) 00:01:19.735 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:19.735 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:19.735 + for nvme in "${!nvme_files[@]}" 00:01:19.735 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi2.img -s 4G 00:01:19.735 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:19.735 + for nvme in "${!nvme_files[@]}" 00:01:19.735 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-cmb.img -s 5G 00:01:19.735 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:19.735 + for nvme in "${!nvme_files[@]}" 00:01:19.735 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-openstack.img -s 8G 00:01:19.735 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:19.735 + for nvme in "${!nvme_files[@]}" 00:01:19.735 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-zns.img -s 5G 00:01:19.735 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:19.735 + for nvme in "${!nvme_files[@]}" 00:01:19.735 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi1.img -s 4G 00:01:19.991 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:19.991 + for nvme in "${!nvme_files[@]}" 00:01:19.991 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi0.img -s 4G 00:01:19.991 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:19.991 + for nvme in "${!nvme_files[@]}" 00:01:19.991 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme.img -s 5G 00:01:19.991 Formatting '/var/lib/libvirt/images/backends/ex3-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:19.991 ++ sudo grep -rl ex3-nvme.img /etc/libvirt/qemu 00:01:19.991 + echo 'End stage prepare_nvme.sh' 00:01:19.991 End stage prepare_nvme.sh 00:01:20.001 [Pipeline] sh 00:01:20.276 + DISTRO=fedora39 00:01:20.276 + CPUS=10 00:01:20.276 + RAM=12288 00:01:20.276 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:20.276 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b 
/var/lib/libvirt/images/backends/ex3-nvme.img -b /var/lib/libvirt/images/backends/ex3-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img -H -a -v -f fedora39 00:01:20.276 00:01:20.276 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:01:20.276 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:01:20.276 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:20.276 HELP=0 00:01:20.276 DRY_RUN=0 00:01:20.276 NVME_FILE=/var/lib/libvirt/images/backends/ex3-nvme.img,/var/lib/libvirt/images/backends/ex3-nvme-multi0.img, 00:01:20.276 NVME_DISKS_TYPE=nvme,nvme, 00:01:20.276 NVME_AUTO_CREATE=0 00:01:20.276 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img, 00:01:20.276 NVME_CMB=,, 00:01:20.276 NVME_PMR=,, 00:01:20.276 NVME_ZNS=,, 00:01:20.276 NVME_MS=,, 00:01:20.276 NVME_FDP=,, 00:01:20.276 SPDK_VAGRANT_DISTRO=fedora39 00:01:20.276 SPDK_VAGRANT_VMCPU=10 00:01:20.276 SPDK_VAGRANT_VMRAM=12288 00:01:20.276 SPDK_VAGRANT_PROVIDER=libvirt 00:01:20.276 SPDK_VAGRANT_HTTP_PROXY= 00:01:20.276 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:20.276 SPDK_OPENSTACK_NETWORK=0 00:01:20.276 VAGRANT_PACKAGE_BOX=0 00:01:20.276 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:20.276 FORCE_DISTRO=true 00:01:20.276 VAGRANT_BOX_VERSION= 00:01:20.276 EXTRA_VAGRANTFILES= 00:01:20.276 NIC_MODEL=e1000 00:01:20.276 00:01:20.276 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:01:20.276 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:22.801 Bringing machine 'default' up with 'libvirt' provider... 00:01:23.367 ==> default: Creating image (snapshot of base box volume). 00:01:23.367 ==> default: Creating domain with the following settings... 
00:01:23.367 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732649718_a499015441486229d941 00:01:23.367 ==> default: -- Domain type: kvm 00:01:23.367 ==> default: -- Cpus: 10 00:01:23.367 ==> default: -- Feature: acpi 00:01:23.367 ==> default: -- Feature: apic 00:01:23.367 ==> default: -- Feature: pae 00:01:23.367 ==> default: -- Memory: 12288M 00:01:23.367 ==> default: -- Memory Backing: hugepages: 00:01:23.367 ==> default: -- Management MAC: 00:01:23.367 ==> default: -- Loader: 00:01:23.367 ==> default: -- Nvram: 00:01:23.367 ==> default: -- Base box: spdk/fedora39 00:01:23.367 ==> default: -- Storage pool: default 00:01:23.367 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732649718_a499015441486229d941.img (20G) 00:01:23.367 ==> default: -- Volume Cache: default 00:01:23.367 ==> default: -- Kernel: 00:01:23.367 ==> default: -- Initrd: 00:01:23.367 ==> default: -- Graphics Type: vnc 00:01:23.367 ==> default: -- Graphics Port: -1 00:01:23.367 ==> default: -- Graphics IP: 127.0.0.1 00:01:23.367 ==> default: -- Graphics Password: Not defined 00:01:23.367 ==> default: -- Video Type: cirrus 00:01:23.367 ==> default: -- Video VRAM: 9216 00:01:23.367 ==> default: -- Sound Type: 00:01:23.367 ==> default: -- Keymap: en-us 00:01:23.367 ==> default: -- TPM Path: 00:01:23.367 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:23.367 ==> default: -- Command line args: 00:01:23.367 ==> default: -> value=-device, 00:01:23.367 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:23.367 ==> default: -> value=-drive, 00:01:23.367 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-0-drive0, 00:01:23.367 ==> default: -> value=-device, 00:01:23.367 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:23.367 ==> default: -> value=-device, 00:01:23.367 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:23.367 ==> default: -> value=-drive, 00:01:23.367 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:23.367 ==> default: -> value=-device, 00:01:23.367 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:23.367 ==> default: -> value=-drive, 00:01:23.368 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:23.368 ==> default: -> value=-device, 00:01:23.368 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:23.368 ==> default: -> value=-drive, 00:01:23.368 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:23.368 ==> default: -> value=-device, 00:01:23.368 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:23.625 ==> default: Creating shared folders metadata... 00:01:23.625 ==> default: Starting domain. 00:01:25.527 ==> default: Waiting for domain to get an IP address... 00:01:43.618 ==> default: Waiting for SSH to become available... 00:01:43.618 ==> default: Configuring and enabling network interfaces... 
00:01:46.141 default: SSH address: 192.168.121.54:22 00:01:46.141 default: SSH username: vagrant 00:01:46.141 default: SSH auth method: private key 00:01:48.037 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:54.697 ==> default: Mounting SSHFS shared folder... 00:01:55.632 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:55.632 ==> default: Checking Mount.. 00:01:57.004 ==> default: Folder Successfully Mounted! 00:01:57.004 00:01:57.004 SUCCESS! 00:01:57.004 00:01:57.004 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:57.004 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:57.004 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:01:57.004 00:01:57.013 [Pipeline] } 00:01:57.028 [Pipeline] // stage 00:01:57.036 [Pipeline] dir 00:01:57.037 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:01:57.038 [Pipeline] { 00:01:57.049 [Pipeline] catchError 00:01:57.051 [Pipeline] { 00:01:57.063 [Pipeline] sh 00:01:57.338 + vagrant ssh-config --host vagrant 00:01:57.338 + sed -ne '/^Host/,$p' 00:01:57.338 + tee ssh_conf 00:01:59.864 Host vagrant 00:01:59.864 HostName 192.168.121.54 00:01:59.864 User vagrant 00:01:59.864 Port 22 00:01:59.864 UserKnownHostsFile /dev/null 00:01:59.864 StrictHostKeyChecking no 00:01:59.864 PasswordAuthentication no 00:01:59.864 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:59.864 IdentitiesOnly yes 00:01:59.864 LogLevel FATAL 00:01:59.864 ForwardAgent yes 00:01:59.864 ForwardX11 yes 00:01:59.864 00:01:59.875 [Pipeline] withEnv 00:01:59.877 [Pipeline] { 00:01:59.891 [Pipeline] sh 00:02:00.169 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:02:00.169 source /etc/os-release 00:02:00.169 [[ -e /image.version ]] && img=$(< /image.version) 00:02:00.169 # Minimal, systemd-like check. 00:02:00.169 if [[ -e /.dockerenv ]]; then 00:02:00.169 # Clear garbage from the node'\''s name: 00:02:00.169 # agt-er_autotest_547-896 -> autotest_547-896 00:02:00.169 # $HOSTNAME is the actual container id 00:02:00.169 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:00.169 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:00.169 # We can assume this is a mount from a host where container is running, 00:02:00.169 # so fetch its hostname to easily identify the target swarm worker. 
00:02:00.169 container="$(< /etc/hostname) ($agent)" 00:02:00.169 else 00:02:00.169 # Fallback 00:02:00.169 container=$agent 00:02:00.169 fi 00:02:00.169 fi 00:02:00.169 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:00.169 ' 00:02:00.178 [Pipeline] } 00:02:00.196 [Pipeline] // withEnv 00:02:00.204 [Pipeline] setCustomBuildProperty 00:02:00.217 [Pipeline] stage 00:02:00.219 [Pipeline] { (Tests) 00:02:00.236 [Pipeline] sh 00:02:00.511 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:00.524 [Pipeline] sh 00:02:00.801 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:01.070 [Pipeline] timeout 00:02:01.071 Timeout set to expire in 1 hr 0 min 00:02:01.073 [Pipeline] { 00:02:01.087 [Pipeline] sh 00:02:01.364 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:02:01.622 HEAD is now at fc308e3c5 accel: Fix comments for spdk_accel_*_dif_verify_copy() 00:02:01.634 [Pipeline] sh 00:02:01.912 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:02:01.923 [Pipeline] sh 00:02:02.201 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:02.217 [Pipeline] sh 00:02:02.494 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo' 00:02:02.494 ++ readlink -f spdk_repo 00:02:02.494 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:02.494 + [[ -n /home/vagrant/spdk_repo ]] 00:02:02.494 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:02.494 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:02.494 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:02.494 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:02.494 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:02.494 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:02:02.494 + cd /home/vagrant/spdk_repo 00:02:02.494 + source /etc/os-release 00:02:02.494 ++ NAME='Fedora Linux' 00:02:02.494 ++ VERSION='39 (Cloud Edition)' 00:02:02.494 ++ ID=fedora 00:02:02.495 ++ VERSION_ID=39 00:02:02.495 ++ VERSION_CODENAME= 00:02:02.495 ++ PLATFORM_ID=platform:f39 00:02:02.495 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:02.495 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:02.495 ++ LOGO=fedora-logo-icon 00:02:02.495 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:02.495 ++ HOME_URL=https://fedoraproject.org/ 00:02:02.495 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:02.495 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:02.495 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:02.495 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:02.495 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:02.495 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:02.495 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:02.495 ++ SUPPORT_END=2024-11-12 00:02:02.495 ++ VARIANT='Cloud Edition' 00:02:02.495 ++ VARIANT_ID=cloud 00:02:02.495 + uname -a 00:02:02.495 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:02.495 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:03.060 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:03.060 Hugepages 00:02:03.060 node hugesize free / total 00:02:03.060 node0 1048576kB 0 / 0 00:02:03.060 node0 2048kB 0 / 0 00:02:03.060 00:02:03.060 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:03.060 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:03.060 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:03.060 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:03.060 + rm -f /tmp/spdk-ld-path 00:02:03.060 + source autorun-spdk.conf 00:02:03.060 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:03.060 ++ SPDK_TEST_NVMF=1 00:02:03.060 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:03.060 ++ SPDK_TEST_URING=1 00:02:03.060 ++ SPDK_TEST_USDT=1 00:02:03.060 ++ SPDK_RUN_UBSAN=1 00:02:03.060 ++ NET_TYPE=virt 00:02:03.060 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:03.060 ++ RUN_NIGHTLY=0 00:02:03.060 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:03.060 + [[ -n '' ]] 00:02:03.060 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:03.060 + for M in /var/spdk/build-*-manifest.txt 00:02:03.060 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:03.060 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:03.060 + for M in /var/spdk/build-*-manifest.txt 00:02:03.060 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:03.060 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:03.060 + for M in /var/spdk/build-*-manifest.txt 00:02:03.060 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:03.060 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:03.060 ++ uname 00:02:03.060 + [[ Linux == \L\i\n\u\x ]] 00:02:03.060 + sudo dmesg -T 00:02:03.060 + sudo dmesg --clear 00:02:03.060 + dmesg_pid=4993 00:02:03.060 + [[ Fedora Linux == FreeBSD ]] 00:02:03.060 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:03.060 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:03.060 + [[ -e 
/var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:03.060 + [[ -x /usr/src/fio-static/fio ]] 00:02:03.060 + sudo dmesg -Tw 00:02:03.060 + export FIO_BIN=/usr/src/fio-static/fio 00:02:03.060 + FIO_BIN=/usr/src/fio-static/fio 00:02:03.060 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:03.060 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:03.060 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:03.060 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:03.060 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:03.060 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:03.060 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:03.060 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:03.060 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:03.318 19:35:58 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:03.318 19:35:58 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:03.318 19:35:58 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:03.318 19:35:58 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:02:03.318 19:35:58 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:03.318 19:35:58 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_URING=1 00:02:03.318 19:35:58 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_USDT=1 00:02:03.318 19:35:58 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:02:03.318 19:35:58 -- spdk_repo/autorun-spdk.conf@7 -- $ NET_TYPE=virt 00:02:03.318 19:35:58 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:03.318 19:35:58 -- spdk_repo/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:02:03.318 19:35:58 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:03.318 19:35:58 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:03.318 19:35:58 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:03.318 19:35:58 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:03.318 19:35:58 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:03.318 19:35:58 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:03.318 19:35:58 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:03.318 19:35:58 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:03.318 19:35:58 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:03.318 19:35:58 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:03.318 19:35:58 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:03.318 19:35:58 -- paths/export.sh@5 -- $ export PATH 00:02:03.318 19:35:58 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:03.318 19:35:58 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:03.318 19:35:58 -- common/autobuild_common.sh@493 -- $ date +%s 00:02:03.318 19:35:58 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732649758.XXXXXX 00:02:03.318 19:35:58 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732649758.L4jZfp 00:02:03.318 19:35:58 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:02:03.318 19:35:58 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:02:03.318 19:35:58 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:03.318 19:35:58 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:03.318 19:35:58 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:03.318 19:35:58 -- common/autobuild_common.sh@509 -- $ get_config_params 00:02:03.318 19:35:58 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:03.318 19:35:58 -- common/autotest_common.sh@10 -- $ set +x 00:02:03.318 19:35:58 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:02:03.318 19:35:58 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:02:03.318 19:35:58 -- pm/common@17 -- $ local monitor 00:02:03.318 19:35:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:03.318 19:35:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:03.318 19:35:58 -- pm/common@25 -- $ sleep 1 00:02:03.318 19:35:58 -- pm/common@21 -- $ date +%s 00:02:03.318 19:35:58 -- pm/common@21 -- $ date +%s 00:02:03.318 19:35:58 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732649758 00:02:03.318 19:35:58 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732649758 00:02:03.318 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732649758_collect-cpu-load.pm.log 00:02:03.318 Redirecting to 
/home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732649758_collect-vmstat.pm.log 00:02:04.251 19:35:59 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:02:04.251 19:35:59 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:04.251 19:35:59 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:04.251 19:35:59 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:04.251 19:35:59 -- spdk/autobuild.sh@16 -- $ date -u 00:02:04.251 Tue Nov 26 07:35:59 PM UTC 2024 00:02:04.251 19:35:59 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:04.251 v25.01-pre-250-gfc308e3c5 00:02:04.251 19:35:59 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:04.251 19:35:59 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:04.251 19:35:59 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:04.251 19:35:59 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:04.251 19:35:59 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:04.251 19:35:59 -- common/autotest_common.sh@10 -- $ set +x 00:02:04.251 ************************************ 00:02:04.251 START TEST ubsan 00:02:04.251 ************************************ 00:02:04.251 using ubsan 00:02:04.251 19:35:59 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:04.251 00:02:04.251 real 0m0.000s 00:02:04.251 user 0m0.000s 00:02:04.251 sys 0m0.000s 00:02:04.251 19:35:59 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:04.251 19:35:59 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:04.251 ************************************ 00:02:04.251 END TEST ubsan 00:02:04.251 ************************************ 00:02:04.251 19:35:59 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:04.251 19:35:59 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:04.251 19:35:59 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:04.251 19:35:59 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:04.251 19:35:59 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:04.251 19:35:59 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:04.251 19:35:59 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:04.251 19:35:59 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:04.251 19:35:59 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:02:04.507 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:04.507 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:04.764 Using 'verbs' RDMA provider 00:02:15.692 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:25.651 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:25.651 Creating mk/config.mk...done. 00:02:25.651 Creating mk/cc.flags.mk...done. 00:02:25.651 Type 'make' to build. 
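For reference, the configure step recorded above can be reproduced by hand from a checked-out SPDK tree. The sketch below is only a summary of what this log already shows: it reuses the exact flags from the configure invocation and the -j10 parallelism from the make step that follows; the /home/vagrant/spdk_repo/spdk path is the layout inside this test VM and is an assumption for any other machine.

    # Sketch: manual equivalent of the configure + make steps recorded in this log.
    # Paths and flags are copied from the log above; adjust for a local checkout.
    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-usdt \
        --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator \
        --disable-unit-tests --enable-ubsan --enable-coverage \
        --with-ublk --with-uring --with-shared
    make -j10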
00:02:25.651 19:36:20 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:25.651 19:36:20 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:25.651 19:36:20 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:25.651 19:36:20 -- common/autotest_common.sh@10 -- $ set +x 00:02:25.651 ************************************ 00:02:25.651 START TEST make 00:02:25.651 ************************************ 00:02:25.651 19:36:20 make -- common/autotest_common.sh@1129 -- $ make -j10 00:02:25.651 make[1]: Nothing to be done for 'all'. 00:02:35.608 The Meson build system 00:02:35.608 Version: 1.5.0 00:02:35.608 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:35.608 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:35.608 Build type: native build 00:02:35.608 Program cat found: YES (/usr/bin/cat) 00:02:35.608 Project name: DPDK 00:02:35.608 Project version: 24.03.0 00:02:35.608 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:35.608 C linker for the host machine: cc ld.bfd 2.40-14 00:02:35.608 Host machine cpu family: x86_64 00:02:35.608 Host machine cpu: x86_64 00:02:35.608 Message: ## Building in Developer Mode ## 00:02:35.608 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:35.608 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:35.608 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:35.608 Program python3 found: YES (/usr/bin/python3) 00:02:35.608 Program cat found: YES (/usr/bin/cat) 00:02:35.608 Compiler for C supports arguments -march=native: YES 00:02:35.608 Checking for size of "void *" : 8 00:02:35.608 Checking for size of "void *" : 8 (cached) 00:02:35.608 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:35.608 Library m found: YES 00:02:35.608 Library numa found: YES 00:02:35.608 Has header "numaif.h" : YES 00:02:35.608 Library fdt found: NO 00:02:35.608 Library execinfo found: NO 00:02:35.608 Has header "execinfo.h" : YES 00:02:35.608 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:35.608 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:35.608 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:35.608 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:35.608 Run-time dependency openssl found: YES 3.1.1 00:02:35.608 Run-time dependency libpcap found: YES 1.10.4 00:02:35.608 Has header "pcap.h" with dependency libpcap: YES 00:02:35.608 Compiler for C supports arguments -Wcast-qual: YES 00:02:35.608 Compiler for C supports arguments -Wdeprecated: YES 00:02:35.608 Compiler for C supports arguments -Wformat: YES 00:02:35.608 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:35.608 Compiler for C supports arguments -Wformat-security: NO 00:02:35.608 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:35.608 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:35.608 Compiler for C supports arguments -Wnested-externs: YES 00:02:35.608 Compiler for C supports arguments -Wold-style-definition: YES 00:02:35.608 Compiler for C supports arguments -Wpointer-arith: YES 00:02:35.608 Compiler for C supports arguments -Wsign-compare: YES 00:02:35.608 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:35.608 Compiler for C supports arguments -Wundef: YES 00:02:35.608 Compiler for C supports arguments -Wwrite-strings: YES 00:02:35.608 Compiler for C supports 
arguments -Wno-address-of-packed-member: YES 00:02:35.608 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:35.608 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:35.608 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:35.608 Program objdump found: YES (/usr/bin/objdump) 00:02:35.608 Compiler for C supports arguments -mavx512f: YES 00:02:35.608 Checking if "AVX512 checking" compiles: YES 00:02:35.608 Fetching value of define "__SSE4_2__" : 1 00:02:35.608 Fetching value of define "__AES__" : 1 00:02:35.608 Fetching value of define "__AVX__" : 1 00:02:35.608 Fetching value of define "__AVX2__" : 1 00:02:35.608 Fetching value of define "__AVX512BW__" : 1 00:02:35.608 Fetching value of define "__AVX512CD__" : 1 00:02:35.608 Fetching value of define "__AVX512DQ__" : 1 00:02:35.608 Fetching value of define "__AVX512F__" : 1 00:02:35.608 Fetching value of define "__AVX512VL__" : 1 00:02:35.608 Fetching value of define "__PCLMUL__" : 1 00:02:35.608 Fetching value of define "__RDRND__" : 1 00:02:35.608 Fetching value of define "__RDSEED__" : 1 00:02:35.608 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:35.608 Fetching value of define "__znver1__" : (undefined) 00:02:35.608 Fetching value of define "__znver2__" : (undefined) 00:02:35.608 Fetching value of define "__znver3__" : (undefined) 00:02:35.608 Fetching value of define "__znver4__" : (undefined) 00:02:35.608 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:35.608 Message: lib/log: Defining dependency "log" 00:02:35.608 Message: lib/kvargs: Defining dependency "kvargs" 00:02:35.608 Message: lib/telemetry: Defining dependency "telemetry" 00:02:35.608 Checking for function "getentropy" : NO 00:02:35.608 Message: lib/eal: Defining dependency "eal" 00:02:35.608 Message: lib/ring: Defining dependency "ring" 00:02:35.608 Message: lib/rcu: Defining dependency "rcu" 00:02:35.608 Message: lib/mempool: Defining dependency "mempool" 00:02:35.608 Message: lib/mbuf: Defining dependency "mbuf" 00:02:35.608 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:35.608 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:35.608 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:35.608 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:35.608 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:35.608 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:35.608 Compiler for C supports arguments -mpclmul: YES 00:02:35.608 Compiler for C supports arguments -maes: YES 00:02:35.608 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:35.608 Compiler for C supports arguments -mavx512bw: YES 00:02:35.608 Compiler for C supports arguments -mavx512dq: YES 00:02:35.608 Compiler for C supports arguments -mavx512vl: YES 00:02:35.609 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:35.609 Compiler for C supports arguments -mavx2: YES 00:02:35.609 Compiler for C supports arguments -mavx: YES 00:02:35.609 Message: lib/net: Defining dependency "net" 00:02:35.609 Message: lib/meter: Defining dependency "meter" 00:02:35.609 Message: lib/ethdev: Defining dependency "ethdev" 00:02:35.609 Message: lib/pci: Defining dependency "pci" 00:02:35.609 Message: lib/cmdline: Defining dependency "cmdline" 00:02:35.609 Message: lib/hash: Defining dependency "hash" 00:02:35.609 Message: lib/timer: Defining dependency "timer" 00:02:35.609 Message: lib/compressdev: Defining dependency "compressdev" 00:02:35.609 Message: lib/cryptodev: Defining 
dependency "cryptodev" 00:02:35.609 Message: lib/dmadev: Defining dependency "dmadev" 00:02:35.609 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:35.609 Message: lib/power: Defining dependency "power" 00:02:35.609 Message: lib/reorder: Defining dependency "reorder" 00:02:35.609 Message: lib/security: Defining dependency "security" 00:02:35.609 Has header "linux/userfaultfd.h" : YES 00:02:35.609 Has header "linux/vduse.h" : YES 00:02:35.609 Message: lib/vhost: Defining dependency "vhost" 00:02:35.609 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:35.609 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:35.609 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:35.609 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:35.609 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:35.609 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:35.609 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:35.609 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:35.609 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:35.609 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:35.609 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:35.609 Configuring doxy-api-html.conf using configuration 00:02:35.609 Configuring doxy-api-man.conf using configuration 00:02:35.609 Program mandb found: YES (/usr/bin/mandb) 00:02:35.609 Program sphinx-build found: NO 00:02:35.609 Configuring rte_build_config.h using configuration 00:02:35.609 Message: 00:02:35.609 ================= 00:02:35.609 Applications Enabled 00:02:35.609 ================= 00:02:35.609 00:02:35.609 apps: 00:02:35.609 00:02:35.609 00:02:35.609 Message: 00:02:35.609 ================= 00:02:35.609 Libraries Enabled 00:02:35.609 ================= 00:02:35.609 00:02:35.609 libs: 00:02:35.609 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:35.609 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:35.609 cryptodev, dmadev, power, reorder, security, vhost, 00:02:35.609 00:02:35.609 Message: 00:02:35.609 =============== 00:02:35.609 Drivers Enabled 00:02:35.609 =============== 00:02:35.609 00:02:35.609 common: 00:02:35.609 00:02:35.609 bus: 00:02:35.609 pci, vdev, 00:02:35.609 mempool: 00:02:35.609 ring, 00:02:35.609 dma: 00:02:35.609 00:02:35.609 net: 00:02:35.609 00:02:35.609 crypto: 00:02:35.609 00:02:35.609 compress: 00:02:35.609 00:02:35.609 vdpa: 00:02:35.609 00:02:35.609 00:02:35.609 Message: 00:02:35.609 ================= 00:02:35.609 Content Skipped 00:02:35.609 ================= 00:02:35.609 00:02:35.609 apps: 00:02:35.609 dumpcap: explicitly disabled via build config 00:02:35.609 graph: explicitly disabled via build config 00:02:35.609 pdump: explicitly disabled via build config 00:02:35.609 proc-info: explicitly disabled via build config 00:02:35.609 test-acl: explicitly disabled via build config 00:02:35.609 test-bbdev: explicitly disabled via build config 00:02:35.609 test-cmdline: explicitly disabled via build config 00:02:35.609 test-compress-perf: explicitly disabled via build config 00:02:35.609 test-crypto-perf: explicitly disabled via build config 00:02:35.609 test-dma-perf: explicitly disabled via build config 00:02:35.609 test-eventdev: explicitly disabled via build config 00:02:35.609 test-fib: explicitly disabled via build config 00:02:35.609 
test-flow-perf: explicitly disabled via build config 00:02:35.609 test-gpudev: explicitly disabled via build config 00:02:35.609 test-mldev: explicitly disabled via build config 00:02:35.609 test-pipeline: explicitly disabled via build config 00:02:35.609 test-pmd: explicitly disabled via build config 00:02:35.609 test-regex: explicitly disabled via build config 00:02:35.609 test-sad: explicitly disabled via build config 00:02:35.609 test-security-perf: explicitly disabled via build config 00:02:35.609 00:02:35.609 libs: 00:02:35.609 argparse: explicitly disabled via build config 00:02:35.609 metrics: explicitly disabled via build config 00:02:35.609 acl: explicitly disabled via build config 00:02:35.609 bbdev: explicitly disabled via build config 00:02:35.609 bitratestats: explicitly disabled via build config 00:02:35.609 bpf: explicitly disabled via build config 00:02:35.609 cfgfile: explicitly disabled via build config 00:02:35.609 distributor: explicitly disabled via build config 00:02:35.609 efd: explicitly disabled via build config 00:02:35.609 eventdev: explicitly disabled via build config 00:02:35.609 dispatcher: explicitly disabled via build config 00:02:35.609 gpudev: explicitly disabled via build config 00:02:35.609 gro: explicitly disabled via build config 00:02:35.609 gso: explicitly disabled via build config 00:02:35.609 ip_frag: explicitly disabled via build config 00:02:35.609 jobstats: explicitly disabled via build config 00:02:35.609 latencystats: explicitly disabled via build config 00:02:35.609 lpm: explicitly disabled via build config 00:02:35.609 member: explicitly disabled via build config 00:02:35.609 pcapng: explicitly disabled via build config 00:02:35.609 rawdev: explicitly disabled via build config 00:02:35.609 regexdev: explicitly disabled via build config 00:02:35.609 mldev: explicitly disabled via build config 00:02:35.609 rib: explicitly disabled via build config 00:02:35.609 sched: explicitly disabled via build config 00:02:35.609 stack: explicitly disabled via build config 00:02:35.609 ipsec: explicitly disabled via build config 00:02:35.609 pdcp: explicitly disabled via build config 00:02:35.609 fib: explicitly disabled via build config 00:02:35.609 port: explicitly disabled via build config 00:02:35.609 pdump: explicitly disabled via build config 00:02:35.609 table: explicitly disabled via build config 00:02:35.609 pipeline: explicitly disabled via build config 00:02:35.609 graph: explicitly disabled via build config 00:02:35.609 node: explicitly disabled via build config 00:02:35.609 00:02:35.609 drivers: 00:02:35.609 common/cpt: not in enabled drivers build config 00:02:35.609 common/dpaax: not in enabled drivers build config 00:02:35.609 common/iavf: not in enabled drivers build config 00:02:35.609 common/idpf: not in enabled drivers build config 00:02:35.609 common/ionic: not in enabled drivers build config 00:02:35.609 common/mvep: not in enabled drivers build config 00:02:35.609 common/octeontx: not in enabled drivers build config 00:02:35.609 bus/auxiliary: not in enabled drivers build config 00:02:35.609 bus/cdx: not in enabled drivers build config 00:02:35.609 bus/dpaa: not in enabled drivers build config 00:02:35.609 bus/fslmc: not in enabled drivers build config 00:02:35.609 bus/ifpga: not in enabled drivers build config 00:02:35.609 bus/platform: not in enabled drivers build config 00:02:35.609 bus/uacce: not in enabled drivers build config 00:02:35.609 bus/vmbus: not in enabled drivers build config 00:02:35.609 common/cnxk: not in enabled 
drivers build config 00:02:35.609 common/mlx5: not in enabled drivers build config 00:02:35.609 common/nfp: not in enabled drivers build config 00:02:35.609 common/nitrox: not in enabled drivers build config 00:02:35.609 common/qat: not in enabled drivers build config 00:02:35.609 common/sfc_efx: not in enabled drivers build config 00:02:35.609 mempool/bucket: not in enabled drivers build config 00:02:35.609 mempool/cnxk: not in enabled drivers build config 00:02:35.609 mempool/dpaa: not in enabled drivers build config 00:02:35.609 mempool/dpaa2: not in enabled drivers build config 00:02:35.609 mempool/octeontx: not in enabled drivers build config 00:02:35.609 mempool/stack: not in enabled drivers build config 00:02:35.609 dma/cnxk: not in enabled drivers build config 00:02:35.609 dma/dpaa: not in enabled drivers build config 00:02:35.609 dma/dpaa2: not in enabled drivers build config 00:02:35.609 dma/hisilicon: not in enabled drivers build config 00:02:35.609 dma/idxd: not in enabled drivers build config 00:02:35.609 dma/ioat: not in enabled drivers build config 00:02:35.609 dma/skeleton: not in enabled drivers build config 00:02:35.609 net/af_packet: not in enabled drivers build config 00:02:35.609 net/af_xdp: not in enabled drivers build config 00:02:35.609 net/ark: not in enabled drivers build config 00:02:35.609 net/atlantic: not in enabled drivers build config 00:02:35.609 net/avp: not in enabled drivers build config 00:02:35.609 net/axgbe: not in enabled drivers build config 00:02:35.609 net/bnx2x: not in enabled drivers build config 00:02:35.609 net/bnxt: not in enabled drivers build config 00:02:35.609 net/bonding: not in enabled drivers build config 00:02:35.609 net/cnxk: not in enabled drivers build config 00:02:35.609 net/cpfl: not in enabled drivers build config 00:02:35.609 net/cxgbe: not in enabled drivers build config 00:02:35.609 net/dpaa: not in enabled drivers build config 00:02:35.609 net/dpaa2: not in enabled drivers build config 00:02:35.609 net/e1000: not in enabled drivers build config 00:02:35.609 net/ena: not in enabled drivers build config 00:02:35.609 net/enetc: not in enabled drivers build config 00:02:35.609 net/enetfec: not in enabled drivers build config 00:02:35.609 net/enic: not in enabled drivers build config 00:02:35.609 net/failsafe: not in enabled drivers build config 00:02:35.609 net/fm10k: not in enabled drivers build config 00:02:35.609 net/gve: not in enabled drivers build config 00:02:35.609 net/hinic: not in enabled drivers build config 00:02:35.609 net/hns3: not in enabled drivers build config 00:02:35.609 net/i40e: not in enabled drivers build config 00:02:35.609 net/iavf: not in enabled drivers build config 00:02:35.609 net/ice: not in enabled drivers build config 00:02:35.609 net/idpf: not in enabled drivers build config 00:02:35.609 net/igc: not in enabled drivers build config 00:02:35.609 net/ionic: not in enabled drivers build config 00:02:35.609 net/ipn3ke: not in enabled drivers build config 00:02:35.610 net/ixgbe: not in enabled drivers build config 00:02:35.610 net/mana: not in enabled drivers build config 00:02:35.610 net/memif: not in enabled drivers build config 00:02:35.610 net/mlx4: not in enabled drivers build config 00:02:35.610 net/mlx5: not in enabled drivers build config 00:02:35.610 net/mvneta: not in enabled drivers build config 00:02:35.610 net/mvpp2: not in enabled drivers build config 00:02:35.610 net/netvsc: not in enabled drivers build config 00:02:35.610 net/nfb: not in enabled drivers build config 00:02:35.610 
net/nfp: not in enabled drivers build config 00:02:35.610 net/ngbe: not in enabled drivers build config 00:02:35.610 net/null: not in enabled drivers build config 00:02:35.610 net/octeontx: not in enabled drivers build config 00:02:35.610 net/octeon_ep: not in enabled drivers build config 00:02:35.610 net/pcap: not in enabled drivers build config 00:02:35.610 net/pfe: not in enabled drivers build config 00:02:35.610 net/qede: not in enabled drivers build config 00:02:35.610 net/ring: not in enabled drivers build config 00:02:35.610 net/sfc: not in enabled drivers build config 00:02:35.610 net/softnic: not in enabled drivers build config 00:02:35.610 net/tap: not in enabled drivers build config 00:02:35.610 net/thunderx: not in enabled drivers build config 00:02:35.610 net/txgbe: not in enabled drivers build config 00:02:35.610 net/vdev_netvsc: not in enabled drivers build config 00:02:35.610 net/vhost: not in enabled drivers build config 00:02:35.610 net/virtio: not in enabled drivers build config 00:02:35.610 net/vmxnet3: not in enabled drivers build config 00:02:35.610 raw/*: missing internal dependency, "rawdev" 00:02:35.610 crypto/armv8: not in enabled drivers build config 00:02:35.610 crypto/bcmfs: not in enabled drivers build config 00:02:35.610 crypto/caam_jr: not in enabled drivers build config 00:02:35.610 crypto/ccp: not in enabled drivers build config 00:02:35.610 crypto/cnxk: not in enabled drivers build config 00:02:35.610 crypto/dpaa_sec: not in enabled drivers build config 00:02:35.610 crypto/dpaa2_sec: not in enabled drivers build config 00:02:35.610 crypto/ipsec_mb: not in enabled drivers build config 00:02:35.610 crypto/mlx5: not in enabled drivers build config 00:02:35.610 crypto/mvsam: not in enabled drivers build config 00:02:35.610 crypto/nitrox: not in enabled drivers build config 00:02:35.610 crypto/null: not in enabled drivers build config 00:02:35.610 crypto/octeontx: not in enabled drivers build config 00:02:35.610 crypto/openssl: not in enabled drivers build config 00:02:35.610 crypto/scheduler: not in enabled drivers build config 00:02:35.610 crypto/uadk: not in enabled drivers build config 00:02:35.610 crypto/virtio: not in enabled drivers build config 00:02:35.610 compress/isal: not in enabled drivers build config 00:02:35.610 compress/mlx5: not in enabled drivers build config 00:02:35.610 compress/nitrox: not in enabled drivers build config 00:02:35.610 compress/octeontx: not in enabled drivers build config 00:02:35.610 compress/zlib: not in enabled drivers build config 00:02:35.610 regex/*: missing internal dependency, "regexdev" 00:02:35.610 ml/*: missing internal dependency, "mldev" 00:02:35.610 vdpa/ifc: not in enabled drivers build config 00:02:35.610 vdpa/mlx5: not in enabled drivers build config 00:02:35.610 vdpa/nfp: not in enabled drivers build config 00:02:35.610 vdpa/sfc: not in enabled drivers build config 00:02:35.610 event/*: missing internal dependency, "eventdev" 00:02:35.610 baseband/*: missing internal dependency, "bbdev" 00:02:35.610 gpu/*: missing internal dependency, "gpudev" 00:02:35.610 00:02:35.610 00:02:36.980 Build targets in project: 84 00:02:36.980 00:02:36.980 DPDK 24.03.0 00:02:36.980 00:02:36.980 User defined options 00:02:36.980 buildtype : debug 00:02:36.980 default_library : shared 00:02:36.980 libdir : lib 00:02:36.980 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:36.980 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:36.980 c_link_args : 00:02:36.981 
cpu_instruction_set: native 00:02:36.981 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:36.981 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:36.981 enable_docs : false 00:02:36.981 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:36.981 enable_kmods : false 00:02:36.981 max_lcores : 128 00:02:36.981 tests : false 00:02:36.981 00:02:36.981 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:37.237 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:37.494 [1/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:37.494 [2/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:37.494 [3/267] Linking static target lib/librte_kvargs.a 00:02:37.494 [4/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:37.494 [5/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:37.494 [6/267] Linking static target lib/librte_log.a 00:02:37.750 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:37.750 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:37.750 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:37.750 [10/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.750 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:37.750 [12/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:37.750 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:37.750 [14/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:37.750 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:37.750 [16/267] Linking static target lib/librte_telemetry.a 00:02:38.007 [17/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:38.007 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:38.007 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:38.265 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:38.265 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:38.265 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:38.265 [23/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:38.265 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:38.265 [25/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.265 [26/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:38.265 [27/267] Linking target lib/librte_log.so.24.1 00:02:38.523 [28/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:38.523 [29/267] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:38.523 [30/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:38.523 [31/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:38.523 [32/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:38.523 [33/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:38.523 [34/267] Linking target lib/librte_kvargs.so.24.1 00:02:38.779 [35/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.780 [36/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:38.780 [37/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:38.780 [38/267] Linking target lib/librte_telemetry.so.24.1 00:02:38.780 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:38.780 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:38.780 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:38.780 [42/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:38.780 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:38.780 [44/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:38.780 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:39.037 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:39.037 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:39.037 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:39.037 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:39.294 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:39.294 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:39.294 [52/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:39.294 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:39.294 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:39.294 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:39.294 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:39.294 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:39.551 [58/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:39.551 [59/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:39.551 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:39.551 [61/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:39.551 [62/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:39.551 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:39.551 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:39.551 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:39.551 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:39.808 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:39.808 [68/267] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:39.808 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:39.808 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:39.808 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:39.808 [72/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:40.065 [73/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:40.065 [74/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:40.065 [75/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:40.065 [76/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:40.065 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:40.065 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:40.065 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:40.065 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:40.065 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:40.322 [82/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:40.322 [83/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:40.322 [84/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:40.323 [85/267] Linking static target lib/librte_eal.a 00:02:40.323 [86/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:40.323 [87/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:40.580 [88/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:40.580 [89/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:40.580 [90/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:40.580 [91/267] Linking static target lib/librte_ring.a 00:02:40.580 [92/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:40.580 [93/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:40.837 [94/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:40.837 [95/267] Linking static target lib/librte_mempool.a 00:02:40.837 [96/267] Linking static target lib/librte_rcu.a 00:02:40.837 [97/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:40.837 [98/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:40.837 [99/267] Linking static target lib/librte_mbuf.a 00:02:40.837 [100/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:40.837 [101/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:41.095 [102/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:41.095 [103/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.095 [104/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:41.095 [105/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:41.095 [106/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.095 [107/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:41.095 [108/267] Linking static target lib/librte_net.a 00:02:41.352 [109/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:41.352 [110/267] Linking static target lib/librte_meter.a 00:02:41.352 [111/267] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:41.352 [112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:41.352 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:41.610 [114/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.610 [115/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:41.610 [116/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.610 [117/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.867 [118/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.867 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:41.867 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:42.124 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:42.124 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:42.124 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:42.124 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:42.383 [125/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:42.383 [126/267] Linking static target lib/librte_pci.a 00:02:42.383 [127/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:42.383 [128/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:42.383 [129/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:42.383 [130/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:42.383 [131/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:42.383 [132/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:42.383 [133/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:42.641 [134/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:42.641 [135/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:42.641 [136/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:42.641 [137/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.641 [138/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:42.641 [139/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:42.641 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:42.641 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:42.641 [142/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:42.641 [143/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:42.641 [144/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:42.641 [145/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:42.641 [146/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:42.641 [147/267] Linking static target lib/librte_cmdline.a 00:02:42.899 [148/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:42.899 [149/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:42.899 [150/267] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:42.899 [151/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:42.899 [152/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:42.899 [153/267] Linking static target lib/librte_ethdev.a 00:02:42.899 [154/267] Linking static target lib/librte_timer.a 00:02:43.157 [155/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:43.157 [156/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:43.157 [157/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:43.157 [158/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:43.157 [159/267] Linking static target lib/librte_compressdev.a 00:02:43.416 [160/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:43.416 [161/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.416 [162/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:43.416 [163/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:43.416 [164/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:43.416 [165/267] Linking static target lib/librte_dmadev.a 00:02:43.416 [166/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:43.674 [167/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:43.674 [168/267] Linking static target lib/librte_hash.a 00:02:43.674 [169/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:43.674 [170/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:43.932 [171/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:43.932 [172/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:43.932 [173/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:43.932 [174/267] Linking static target lib/librte_cryptodev.a 00:02:43.932 [175/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.932 [176/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.189 [177/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.189 [178/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:44.189 [179/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:44.189 [180/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:44.189 [181/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:44.446 [182/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:44.446 [183/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:44.446 [184/267] Linking static target lib/librte_power.a 00:02:44.446 [185/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:44.446 [186/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.446 [187/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:44.446 [188/267] Linking static target lib/librte_security.a 00:02:44.705 [189/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:44.705 [190/267] Compiling C object 
lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:44.705 [191/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:44.705 [192/267] Linking static target lib/librte_reorder.a 00:02:44.962 [193/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.220 [194/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:45.220 [195/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:45.220 [196/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.220 [197/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:45.220 [198/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.476 [199/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:45.476 [200/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:45.476 [201/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:45.476 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:45.476 [203/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:45.476 [204/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:45.476 [205/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:45.732 [206/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:45.732 [207/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:45.732 [208/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:45.732 [209/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:45.732 [210/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.732 [211/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:45.990 [212/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:45.990 [213/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:45.990 [214/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:45.990 [215/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:45.990 [216/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:45.990 [217/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:45.990 [218/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:45.990 [219/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:45.990 [220/267] Linking static target drivers/librte_bus_vdev.a 00:02:45.990 [221/267] Linking static target drivers/librte_bus_pci.a 00:02:45.990 [222/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:45.990 [223/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:45.990 [224/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:45.990 [225/267] Linking static target drivers/librte_mempool_ring.a 00:02:46.247 [226/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.247 [227/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by 
meson to capture output) 00:02:46.812 [228/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:46.812 [229/267] Linking static target lib/librte_vhost.a 00:02:47.807 [230/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.807 [231/267] Linking target lib/librte_eal.so.24.1 00:02:47.807 [232/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.807 [233/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:47.807 [234/267] Linking target lib/librte_meter.so.24.1 00:02:47.807 [235/267] Linking target lib/librte_timer.so.24.1 00:02:47.807 [236/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:47.807 [237/267] Linking target lib/librte_ring.so.24.1 00:02:47.807 [238/267] Linking target lib/librte_dmadev.so.24.1 00:02:47.807 [239/267] Linking target lib/librte_pci.so.24.1 00:02:47.807 [240/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:47.807 [241/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:47.807 [242/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:47.807 [243/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:47.807 [244/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:47.807 [245/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:47.807 [246/267] Linking target lib/librte_rcu.so.24.1 00:02:47.807 [247/267] Linking target lib/librte_mempool.so.24.1 00:02:48.064 [248/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:48.064 [249/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:48.064 [250/267] Linking target lib/librte_mbuf.so.24.1 00:02:48.064 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:48.064 [252/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:48.064 [253/267] Linking target lib/librte_reorder.so.24.1 00:02:48.064 [254/267] Linking target lib/librte_compressdev.so.24.1 00:02:48.064 [255/267] Linking target lib/librte_net.so.24.1 00:02:48.064 [256/267] Linking target lib/librte_cryptodev.so.24.1 00:02:48.320 [257/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:48.320 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:48.320 [259/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.320 [260/267] Linking target lib/librte_security.so.24.1 00:02:48.320 [261/267] Linking target lib/librte_cmdline.so.24.1 00:02:48.320 [262/267] Linking target lib/librte_hash.so.24.1 00:02:48.320 [263/267] Linking target lib/librte_ethdev.so.24.1 00:02:48.578 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:48.578 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:48.578 [266/267] Linking target lib/librte_power.so.24.1 00:02:48.578 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:48.578 INFO: autodetecting backend as ninja 00:02:48.578 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:03.465 CC lib/log/log.o 00:03:03.465 CC lib/ut/ut.o 00:03:03.465 CC lib/log/log_flags.o 00:03:03.465 CC lib/ut_mock/mock.o 
00:03:03.465 CC lib/log/log_deprecated.o 00:03:03.465 LIB libspdk_ut.a 00:03:03.465 SO libspdk_ut.so.2.0 00:03:03.465 LIB libspdk_log.a 00:03:03.465 LIB libspdk_ut_mock.a 00:03:03.465 SO libspdk_log.so.7.1 00:03:03.465 SYMLINK libspdk_ut.so 00:03:03.465 SO libspdk_ut_mock.so.6.0 00:03:03.465 SYMLINK libspdk_ut_mock.so 00:03:03.465 SYMLINK libspdk_log.so 00:03:03.465 CC lib/util/base64.o 00:03:03.465 CC lib/util/cpuset.o 00:03:03.465 CC lib/util/crc16.o 00:03:03.465 CC lib/util/crc32c.o 00:03:03.465 CC lib/util/bit_array.o 00:03:03.465 CC lib/util/crc32.o 00:03:03.465 CC lib/dma/dma.o 00:03:03.465 CC lib/ioat/ioat.o 00:03:03.465 CXX lib/trace_parser/trace.o 00:03:03.465 CC lib/util/crc32_ieee.o 00:03:03.465 CC lib/util/crc64.o 00:03:03.465 CC lib/vfio_user/host/vfio_user_pci.o 00:03:03.465 CC lib/util/dif.o 00:03:03.465 CC lib/util/fd.o 00:03:03.465 LIB libspdk_dma.a 00:03:03.465 CC lib/vfio_user/host/vfio_user.o 00:03:03.465 CC lib/util/fd_group.o 00:03:03.465 CC lib/util/file.o 00:03:03.465 SO libspdk_dma.so.5.0 00:03:03.466 CC lib/util/hexlify.o 00:03:03.466 SYMLINK libspdk_dma.so 00:03:03.466 CC lib/util/iov.o 00:03:03.466 CC lib/util/math.o 00:03:03.466 LIB libspdk_ioat.a 00:03:03.466 CC lib/util/net.o 00:03:03.466 SO libspdk_ioat.so.7.0 00:03:03.466 SYMLINK libspdk_ioat.so 00:03:03.466 CC lib/util/pipe.o 00:03:03.466 CC lib/util/strerror_tls.o 00:03:03.466 CC lib/util/string.o 00:03:03.466 CC lib/util/uuid.o 00:03:03.466 CC lib/util/xor.o 00:03:03.466 CC lib/util/zipf.o 00:03:03.466 CC lib/util/md5.o 00:03:03.466 LIB libspdk_vfio_user.a 00:03:03.466 SO libspdk_vfio_user.so.5.0 00:03:03.466 SYMLINK libspdk_vfio_user.so 00:03:03.466 LIB libspdk_util.a 00:03:03.466 SO libspdk_util.so.10.1 00:03:03.723 SYMLINK libspdk_util.so 00:03:03.723 LIB libspdk_trace_parser.a 00:03:03.723 SO libspdk_trace_parser.so.6.0 00:03:03.723 CC lib/json/json_parse.o 00:03:03.723 CC lib/json/json_util.o 00:03:03.723 CC lib/json/json_write.o 00:03:03.723 CC lib/idxd/idxd.o 00:03:03.723 CC lib/idxd/idxd_user.o 00:03:03.723 CC lib/conf/conf.o 00:03:03.723 CC lib/vmd/vmd.o 00:03:03.723 CC lib/rdma_utils/rdma_utils.o 00:03:03.723 CC lib/env_dpdk/env.o 00:03:03.981 SYMLINK libspdk_trace_parser.so 00:03:03.981 CC lib/vmd/led.o 00:03:03.981 LIB libspdk_conf.a 00:03:03.981 CC lib/env_dpdk/memory.o 00:03:03.981 CC lib/idxd/idxd_kernel.o 00:03:03.981 SO libspdk_conf.so.6.0 00:03:03.981 CC lib/env_dpdk/pci.o 00:03:03.981 SYMLINK libspdk_conf.so 00:03:03.981 CC lib/env_dpdk/init.o 00:03:03.981 CC lib/env_dpdk/threads.o 00:03:03.981 LIB libspdk_rdma_utils.a 00:03:03.981 SO libspdk_rdma_utils.so.1.0 00:03:03.981 LIB libspdk_json.a 00:03:03.981 SYMLINK libspdk_rdma_utils.so 00:03:03.981 CC lib/env_dpdk/pci_ioat.o 00:03:04.239 CC lib/env_dpdk/pci_virtio.o 00:03:04.239 SO libspdk_json.so.6.0 00:03:04.239 LIB libspdk_idxd.a 00:03:04.239 SYMLINK libspdk_json.so 00:03:04.239 CC lib/env_dpdk/pci_vmd.o 00:03:04.239 SO libspdk_idxd.so.12.1 00:03:04.239 CC lib/env_dpdk/pci_idxd.o 00:03:04.239 CC lib/rdma_provider/common.o 00:03:04.239 SYMLINK libspdk_idxd.so 00:03:04.239 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:04.239 CC lib/env_dpdk/pci_event.o 00:03:04.239 CC lib/env_dpdk/sigbus_handler.o 00:03:04.239 CC lib/env_dpdk/pci_dpdk.o 00:03:04.239 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:04.239 CC lib/jsonrpc/jsonrpc_server.o 00:03:04.496 LIB libspdk_vmd.a 00:03:04.496 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:04.496 SO libspdk_vmd.so.6.0 00:03:04.496 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:04.496 CC lib/jsonrpc/jsonrpc_client.o 
00:03:04.496 LIB libspdk_rdma_provider.a 00:03:04.496 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:04.496 SYMLINK libspdk_vmd.so 00:03:04.496 SO libspdk_rdma_provider.so.7.0 00:03:04.496 SYMLINK libspdk_rdma_provider.so 00:03:04.755 LIB libspdk_jsonrpc.a 00:03:04.755 SO libspdk_jsonrpc.so.6.0 00:03:04.755 SYMLINK libspdk_jsonrpc.so 00:03:05.012 LIB libspdk_env_dpdk.a 00:03:05.012 CC lib/rpc/rpc.o 00:03:05.012 SO libspdk_env_dpdk.so.15.1 00:03:05.012 SYMLINK libspdk_env_dpdk.so 00:03:05.012 LIB libspdk_rpc.a 00:03:05.270 SO libspdk_rpc.so.6.0 00:03:05.270 SYMLINK libspdk_rpc.so 00:03:05.270 CC lib/notify/notify.o 00:03:05.270 CC lib/notify/notify_rpc.o 00:03:05.270 CC lib/keyring/keyring.o 00:03:05.270 CC lib/keyring/keyring_rpc.o 00:03:05.270 CC lib/trace/trace.o 00:03:05.270 CC lib/trace/trace_flags.o 00:03:05.270 CC lib/trace/trace_rpc.o 00:03:05.529 LIB libspdk_keyring.a 00:03:05.529 LIB libspdk_trace.a 00:03:05.529 LIB libspdk_notify.a 00:03:05.529 SO libspdk_keyring.so.2.0 00:03:05.529 SO libspdk_trace.so.11.0 00:03:05.529 SO libspdk_notify.so.6.0 00:03:05.529 SYMLINK libspdk_keyring.so 00:03:05.529 SYMLINK libspdk_notify.so 00:03:05.529 SYMLINK libspdk_trace.so 00:03:05.785 CC lib/thread/thread.o 00:03:05.785 CC lib/thread/iobuf.o 00:03:05.785 CC lib/sock/sock.o 00:03:05.785 CC lib/sock/sock_rpc.o 00:03:06.043 LIB libspdk_sock.a 00:03:06.300 SO libspdk_sock.so.10.0 00:03:06.300 SYMLINK libspdk_sock.so 00:03:06.588 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:06.588 CC lib/nvme/nvme_fabric.o 00:03:06.588 CC lib/nvme/nvme_ns.o 00:03:06.588 CC lib/nvme/nvme_ctrlr.o 00:03:06.588 CC lib/nvme/nvme_pcie.o 00:03:06.588 CC lib/nvme/nvme_ns_cmd.o 00:03:06.588 CC lib/nvme/nvme_pcie_common.o 00:03:06.588 CC lib/nvme/nvme.o 00:03:06.588 CC lib/nvme/nvme_qpair.o 00:03:06.846 LIB libspdk_thread.a 00:03:06.846 SO libspdk_thread.so.11.0 00:03:06.846 CC lib/nvme/nvme_quirks.o 00:03:06.846 SYMLINK libspdk_thread.so 00:03:06.846 CC lib/nvme/nvme_transport.o 00:03:07.104 CC lib/nvme/nvme_discovery.o 00:03:07.104 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:07.104 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:07.104 CC lib/nvme/nvme_tcp.o 00:03:07.104 CC lib/nvme/nvme_opal.o 00:03:07.104 CC lib/nvme/nvme_io_msg.o 00:03:07.104 CC lib/nvme/nvme_poll_group.o 00:03:07.361 CC lib/nvme/nvme_zns.o 00:03:07.361 CC lib/nvme/nvme_stubs.o 00:03:07.618 CC lib/nvme/nvme_auth.o 00:03:07.619 CC lib/nvme/nvme_cuse.o 00:03:07.876 CC lib/accel/accel.o 00:03:07.876 CC lib/nvme/nvme_rdma.o 00:03:07.876 CC lib/accel/accel_rpc.o 00:03:07.876 CC lib/init/json_config.o 00:03:07.876 CC lib/blob/blobstore.o 00:03:07.876 CC lib/virtio/virtio.o 00:03:07.876 CC lib/virtio/virtio_vhost_user.o 00:03:08.134 CC lib/init/subsystem.o 00:03:08.134 CC lib/init/subsystem_rpc.o 00:03:08.134 CC lib/blob/request.o 00:03:08.134 CC lib/virtio/virtio_vfio_user.o 00:03:08.134 CC lib/virtio/virtio_pci.o 00:03:08.134 CC lib/init/rpc.o 00:03:08.392 CC lib/blob/zeroes.o 00:03:08.392 CC lib/blob/blob_bs_dev.o 00:03:08.392 CC lib/accel/accel_sw.o 00:03:08.392 LIB libspdk_init.a 00:03:08.392 SO libspdk_init.so.6.0 00:03:08.392 SYMLINK libspdk_init.so 00:03:08.392 LIB libspdk_virtio.a 00:03:08.392 SO libspdk_virtio.so.7.0 00:03:08.649 SYMLINK libspdk_virtio.so 00:03:08.649 CC lib/fsdev/fsdev_rpc.o 00:03:08.649 CC lib/fsdev/fsdev_io.o 00:03:08.649 CC lib/fsdev/fsdev.o 00:03:08.649 CC lib/event/app.o 00:03:08.649 CC lib/event/reactor.o 00:03:08.649 CC lib/event/log_rpc.o 00:03:08.649 CC lib/event/app_rpc.o 00:03:08.649 CC lib/event/scheduler_static.o 00:03:08.649 LIB libspdk_accel.a 
00:03:08.649 SO libspdk_accel.so.16.0 00:03:08.649 LIB libspdk_nvme.a 00:03:08.649 SYMLINK libspdk_accel.so 00:03:08.907 SO libspdk_nvme.so.15.0 00:03:08.907 LIB libspdk_event.a 00:03:08.907 CC lib/bdev/bdev.o 00:03:08.907 CC lib/bdev/bdev_rpc.o 00:03:08.907 CC lib/bdev/bdev_zone.o 00:03:08.907 CC lib/bdev/part.o 00:03:08.907 CC lib/bdev/scsi_nvme.o 00:03:08.907 SO libspdk_event.so.14.0 00:03:08.907 SYMLINK libspdk_nvme.so 00:03:08.907 SYMLINK libspdk_event.so 00:03:09.165 LIB libspdk_fsdev.a 00:03:09.165 SO libspdk_fsdev.so.2.0 00:03:09.165 SYMLINK libspdk_fsdev.so 00:03:09.422 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:09.987 LIB libspdk_fuse_dispatcher.a 00:03:09.987 SO libspdk_fuse_dispatcher.so.1.0 00:03:09.987 SYMLINK libspdk_fuse_dispatcher.so 00:03:10.552 LIB libspdk_blob.a 00:03:10.552 SO libspdk_blob.so.12.0 00:03:10.552 SYMLINK libspdk_blob.so 00:03:10.809 CC lib/blobfs/blobfs.o 00:03:10.809 CC lib/blobfs/tree.o 00:03:10.809 CC lib/lvol/lvol.o 00:03:11.065 LIB libspdk_bdev.a 00:03:11.065 SO libspdk_bdev.so.17.0 00:03:11.065 SYMLINK libspdk_bdev.so 00:03:11.322 CC lib/nbd/nbd.o 00:03:11.322 CC lib/nvmf/ctrlr_discovery.o 00:03:11.322 CC lib/nvmf/ctrlr.o 00:03:11.322 CC lib/nvmf/subsystem.o 00:03:11.322 CC lib/nvmf/ctrlr_bdev.o 00:03:11.322 CC lib/ublk/ublk.o 00:03:11.322 CC lib/ftl/ftl_core.o 00:03:11.322 CC lib/scsi/dev.o 00:03:11.322 LIB libspdk_blobfs.a 00:03:11.322 SO libspdk_blobfs.so.11.0 00:03:11.578 CC lib/scsi/lun.o 00:03:11.578 SYMLINK libspdk_blobfs.so 00:03:11.578 CC lib/scsi/port.o 00:03:11.578 LIB libspdk_lvol.a 00:03:11.578 SO libspdk_lvol.so.11.0 00:03:11.578 CC lib/ftl/ftl_init.o 00:03:11.578 CC lib/nbd/nbd_rpc.o 00:03:11.578 SYMLINK libspdk_lvol.so 00:03:11.578 CC lib/ftl/ftl_layout.o 00:03:11.578 CC lib/ftl/ftl_debug.o 00:03:11.578 CC lib/ftl/ftl_io.o 00:03:11.847 CC lib/scsi/scsi.o 00:03:11.847 LIB libspdk_nbd.a 00:03:11.847 SO libspdk_nbd.so.7.0 00:03:11.847 CC lib/ublk/ublk_rpc.o 00:03:11.847 CC lib/ftl/ftl_sb.o 00:03:11.847 CC lib/nvmf/nvmf.o 00:03:11.847 SYMLINK libspdk_nbd.so 00:03:11.847 CC lib/ftl/ftl_l2p.o 00:03:11.847 CC lib/ftl/ftl_l2p_flat.o 00:03:11.847 CC lib/scsi/scsi_bdev.o 00:03:11.847 CC lib/scsi/scsi_pr.o 00:03:11.847 CC lib/ftl/ftl_nv_cache.o 00:03:11.847 LIB libspdk_ublk.a 00:03:11.847 SO libspdk_ublk.so.3.0 00:03:12.105 CC lib/nvmf/nvmf_rpc.o 00:03:12.105 CC lib/nvmf/transport.o 00:03:12.105 SYMLINK libspdk_ublk.so 00:03:12.105 CC lib/nvmf/tcp.o 00:03:12.105 CC lib/ftl/ftl_band.o 00:03:12.105 CC lib/nvmf/stubs.o 00:03:12.105 CC lib/scsi/scsi_rpc.o 00:03:12.361 CC lib/scsi/task.o 00:03:12.361 CC lib/ftl/ftl_band_ops.o 00:03:12.361 CC lib/ftl/ftl_writer.o 00:03:12.361 CC lib/nvmf/mdns_server.o 00:03:12.361 LIB libspdk_scsi.a 00:03:12.618 CC lib/nvmf/rdma.o 00:03:12.618 SO libspdk_scsi.so.9.0 00:03:12.618 CC lib/nvmf/auth.o 00:03:12.618 CC lib/ftl/ftl_rq.o 00:03:12.618 CC lib/ftl/ftl_reloc.o 00:03:12.618 SYMLINK libspdk_scsi.so 00:03:12.618 CC lib/ftl/ftl_l2p_cache.o 00:03:12.618 CC lib/ftl/ftl_p2l.o 00:03:12.619 CC lib/ftl/ftl_p2l_log.o 00:03:12.619 CC lib/ftl/mngt/ftl_mngt.o 00:03:12.619 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:12.619 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:12.876 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:12.876 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:12.876 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:12.876 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:12.876 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:12.876 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:12.876 CC lib/iscsi/conn.o 00:03:13.132 CC lib/iscsi/init_grp.o 00:03:13.132 CC lib/iscsi/iscsi.o 
00:03:13.132 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:13.132 CC lib/iscsi/param.o 00:03:13.132 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:13.132 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:13.132 CC lib/iscsi/portal_grp.o 00:03:13.132 CC lib/vhost/vhost.o 00:03:13.132 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:13.443 CC lib/vhost/vhost_rpc.o 00:03:13.443 CC lib/iscsi/tgt_node.o 00:03:13.443 CC lib/iscsi/iscsi_subsystem.o 00:03:13.443 CC lib/vhost/vhost_scsi.o 00:03:13.443 CC lib/iscsi/iscsi_rpc.o 00:03:13.443 CC lib/iscsi/task.o 00:03:13.443 CC lib/ftl/utils/ftl_conf.o 00:03:13.703 CC lib/ftl/utils/ftl_md.o 00:03:13.703 CC lib/vhost/vhost_blk.o 00:03:13.703 CC lib/vhost/rte_vhost_user.o 00:03:13.703 CC lib/ftl/utils/ftl_mempool.o 00:03:13.703 CC lib/ftl/utils/ftl_bitmap.o 00:03:13.703 CC lib/ftl/utils/ftl_property.o 00:03:13.703 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:14.032 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:14.032 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:14.032 LIB libspdk_iscsi.a 00:03:14.032 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:14.032 SO libspdk_iscsi.so.8.0 00:03:14.032 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:14.032 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:14.032 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:14.032 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:14.032 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:14.032 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:14.032 SYMLINK libspdk_iscsi.so 00:03:14.289 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:14.289 LIB libspdk_nvmf.a 00:03:14.289 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:14.289 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:14.289 CC lib/ftl/base/ftl_base_dev.o 00:03:14.289 CC lib/ftl/base/ftl_base_bdev.o 00:03:14.289 CC lib/ftl/ftl_trace.o 00:03:14.289 SO libspdk_nvmf.so.20.0 00:03:14.289 LIB libspdk_vhost.a 00:03:14.289 SO libspdk_vhost.so.8.0 00:03:14.547 SYMLINK libspdk_nvmf.so 00:03:14.547 SYMLINK libspdk_vhost.so 00:03:14.547 LIB libspdk_ftl.a 00:03:14.803 SO libspdk_ftl.so.9.0 00:03:14.803 SYMLINK libspdk_ftl.so 00:03:15.059 CC module/env_dpdk/env_dpdk_rpc.o 00:03:15.316 CC module/blob/bdev/blob_bdev.o 00:03:15.316 CC module/sock/posix/posix.o 00:03:15.316 CC module/keyring/file/keyring.o 00:03:15.316 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:15.316 CC module/sock/uring/uring.o 00:03:15.316 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:15.316 CC module/scheduler/gscheduler/gscheduler.o 00:03:15.316 CC module/fsdev/aio/fsdev_aio.o 00:03:15.316 CC module/accel/error/accel_error.o 00:03:15.316 LIB libspdk_env_dpdk_rpc.a 00:03:15.316 SO libspdk_env_dpdk_rpc.so.6.0 00:03:15.316 SYMLINK libspdk_env_dpdk_rpc.so 00:03:15.316 CC module/accel/error/accel_error_rpc.o 00:03:15.316 CC module/keyring/file/keyring_rpc.o 00:03:15.316 LIB libspdk_scheduler_dynamic.a 00:03:15.316 LIB libspdk_scheduler_dpdk_governor.a 00:03:15.316 SO libspdk_scheduler_dynamic.so.4.0 00:03:15.316 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:15.316 LIB libspdk_scheduler_gscheduler.a 00:03:15.316 LIB libspdk_accel_error.a 00:03:15.316 LIB libspdk_keyring_file.a 00:03:15.316 SYMLINK libspdk_scheduler_dynamic.so 00:03:15.316 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:15.316 SO libspdk_scheduler_gscheduler.so.4.0 00:03:15.316 SO libspdk_accel_error.so.2.0 00:03:15.316 SO libspdk_keyring_file.so.2.0 00:03:15.573 LIB libspdk_blob_bdev.a 00:03:15.574 SO libspdk_blob_bdev.so.12.0 00:03:15.574 SYMLINK libspdk_keyring_file.so 00:03:15.574 SYMLINK libspdk_scheduler_gscheduler.so 00:03:15.574 SYMLINK libspdk_accel_error.so 00:03:15.574 CC 
module/fsdev/aio/fsdev_aio_rpc.o 00:03:15.574 CC module/fsdev/aio/linux_aio_mgr.o 00:03:15.574 SYMLINK libspdk_blob_bdev.so 00:03:15.574 CC module/accel/iaa/accel_iaa.o 00:03:15.574 CC module/accel/ioat/accel_ioat.o 00:03:15.574 CC module/accel/dsa/accel_dsa.o 00:03:15.574 CC module/keyring/linux/keyring.o 00:03:15.574 CC module/accel/dsa/accel_dsa_rpc.o 00:03:15.574 CC module/keyring/linux/keyring_rpc.o 00:03:15.574 CC module/accel/iaa/accel_iaa_rpc.o 00:03:15.574 CC module/accel/ioat/accel_ioat_rpc.o 00:03:15.831 CC module/bdev/delay/vbdev_delay.o 00:03:15.831 LIB libspdk_keyring_linux.a 00:03:15.831 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:15.831 LIB libspdk_sock_uring.a 00:03:15.831 LIB libspdk_accel_dsa.a 00:03:15.831 SO libspdk_sock_uring.so.5.0 00:03:15.831 SO libspdk_keyring_linux.so.1.0 00:03:15.831 LIB libspdk_accel_iaa.a 00:03:15.831 LIB libspdk_sock_posix.a 00:03:15.831 LIB libspdk_accel_ioat.a 00:03:15.831 SO libspdk_accel_dsa.so.5.0 00:03:15.831 SO libspdk_accel_ioat.so.6.0 00:03:15.831 SO libspdk_accel_iaa.so.3.0 00:03:15.831 SO libspdk_sock_posix.so.6.0 00:03:15.831 SYMLINK libspdk_sock_uring.so 00:03:15.831 SYMLINK libspdk_keyring_linux.so 00:03:15.831 LIB libspdk_fsdev_aio.a 00:03:15.831 SYMLINK libspdk_accel_dsa.so 00:03:15.831 SYMLINK libspdk_accel_ioat.so 00:03:15.831 SO libspdk_fsdev_aio.so.1.0 00:03:15.831 SYMLINK libspdk_sock_posix.so 00:03:15.831 SYMLINK libspdk_accel_iaa.so 00:03:16.088 CC module/blobfs/bdev/blobfs_bdev.o 00:03:16.088 SYMLINK libspdk_fsdev_aio.so 00:03:16.088 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:16.088 CC module/bdev/error/vbdev_error.o 00:03:16.088 CC module/bdev/lvol/vbdev_lvol.o 00:03:16.088 CC module/bdev/gpt/gpt.o 00:03:16.088 CC module/bdev/malloc/bdev_malloc.o 00:03:16.088 LIB libspdk_bdev_delay.a 00:03:16.088 CC module/bdev/null/bdev_null.o 00:03:16.088 SO libspdk_bdev_delay.so.6.0 00:03:16.088 CC module/bdev/nvme/bdev_nvme.o 00:03:16.088 CC module/bdev/passthru/vbdev_passthru.o 00:03:16.088 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:16.088 LIB libspdk_blobfs_bdev.a 00:03:16.088 SYMLINK libspdk_bdev_delay.so 00:03:16.088 SO libspdk_blobfs_bdev.so.6.0 00:03:16.347 SYMLINK libspdk_blobfs_bdev.so 00:03:16.347 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:16.347 CC module/bdev/gpt/vbdev_gpt.o 00:03:16.347 CC module/bdev/error/vbdev_error_rpc.o 00:03:16.347 CC module/bdev/null/bdev_null_rpc.o 00:03:16.347 CC module/bdev/raid/bdev_raid.o 00:03:16.347 LIB libspdk_bdev_passthru.a 00:03:16.347 SO libspdk_bdev_passthru.so.6.0 00:03:16.347 CC module/bdev/split/vbdev_split.o 00:03:16.347 LIB libspdk_bdev_null.a 00:03:16.347 LIB libspdk_bdev_gpt.a 00:03:16.347 SYMLINK libspdk_bdev_passthru.so 00:03:16.347 CC module/bdev/split/vbdev_split_rpc.o 00:03:16.604 SO libspdk_bdev_null.so.6.0 00:03:16.604 SO libspdk_bdev_gpt.so.6.0 00:03:16.604 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:16.604 LIB libspdk_bdev_error.a 00:03:16.604 SYMLINK libspdk_bdev_gpt.so 00:03:16.604 SYMLINK libspdk_bdev_null.so 00:03:16.604 LIB libspdk_bdev_lvol.a 00:03:16.604 CC module/bdev/raid/bdev_raid_rpc.o 00:03:16.604 SO libspdk_bdev_error.so.6.0 00:03:16.604 SO libspdk_bdev_lvol.so.6.0 00:03:16.604 CC module/bdev/raid/bdev_raid_sb.o 00:03:16.604 SYMLINK libspdk_bdev_error.so 00:03:16.604 LIB libspdk_bdev_malloc.a 00:03:16.604 CC module/bdev/raid/raid0.o 00:03:16.604 SYMLINK libspdk_bdev_lvol.so 00:03:16.604 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:16.604 SO libspdk_bdev_malloc.so.6.0 00:03:16.604 LIB libspdk_bdev_split.a 00:03:16.604 CC 
module/bdev/uring/bdev_uring.o 00:03:16.604 SO libspdk_bdev_split.so.6.0 00:03:16.604 CC module/bdev/uring/bdev_uring_rpc.o 00:03:16.604 SYMLINK libspdk_bdev_split.so 00:03:16.604 SYMLINK libspdk_bdev_malloc.so 00:03:16.604 CC module/bdev/raid/raid1.o 00:03:16.861 CC module/bdev/aio/bdev_aio.o 00:03:16.861 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:16.861 CC module/bdev/aio/bdev_aio_rpc.o 00:03:16.861 CC module/bdev/raid/concat.o 00:03:16.861 CC module/bdev/ftl/bdev_ftl.o 00:03:16.861 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:16.861 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:17.118 LIB libspdk_bdev_zone_block.a 00:03:17.118 SO libspdk_bdev_zone_block.so.6.0 00:03:17.118 CC module/bdev/nvme/nvme_rpc.o 00:03:17.118 LIB libspdk_bdev_uring.a 00:03:17.118 LIB libspdk_bdev_ftl.a 00:03:17.118 CC module/bdev/iscsi/bdev_iscsi.o 00:03:17.118 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:17.118 SO libspdk_bdev_uring.so.6.0 00:03:17.118 SO libspdk_bdev_ftl.so.6.0 00:03:17.118 SYMLINK libspdk_bdev_zone_block.so 00:03:17.118 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:17.118 LIB libspdk_bdev_raid.a 00:03:17.118 SYMLINK libspdk_bdev_ftl.so 00:03:17.118 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:17.118 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:17.118 SO libspdk_bdev_raid.so.6.0 00:03:17.118 LIB libspdk_bdev_aio.a 00:03:17.118 SYMLINK libspdk_bdev_uring.so 00:03:17.118 CC module/bdev/nvme/bdev_mdns_client.o 00:03:17.118 SO libspdk_bdev_aio.so.6.0 00:03:17.374 SYMLINK libspdk_bdev_raid.so 00:03:17.374 CC module/bdev/nvme/vbdev_opal.o 00:03:17.374 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:17.374 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:17.374 SYMLINK libspdk_bdev_aio.so 00:03:17.374 LIB libspdk_bdev_iscsi.a 00:03:17.374 SO libspdk_bdev_iscsi.so.6.0 00:03:17.374 LIB libspdk_bdev_virtio.a 00:03:17.631 SO libspdk_bdev_virtio.so.6.0 00:03:17.631 SYMLINK libspdk_bdev_iscsi.so 00:03:17.631 SYMLINK libspdk_bdev_virtio.so 00:03:18.561 LIB libspdk_bdev_nvme.a 00:03:18.561 SO libspdk_bdev_nvme.so.7.1 00:03:18.818 SYMLINK libspdk_bdev_nvme.so 00:03:19.074 CC module/event/subsystems/iobuf/iobuf.o 00:03:19.074 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:19.074 CC module/event/subsystems/sock/sock.o 00:03:19.074 CC module/event/subsystems/scheduler/scheduler.o 00:03:19.074 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:19.074 CC module/event/subsystems/keyring/keyring.o 00:03:19.074 CC module/event/subsystems/fsdev/fsdev.o 00:03:19.074 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:19.074 CC module/event/subsystems/vmd/vmd.o 00:03:19.074 LIB libspdk_event_vhost_blk.a 00:03:19.074 LIB libspdk_event_sock.a 00:03:19.074 LIB libspdk_event_keyring.a 00:03:19.074 SO libspdk_event_vhost_blk.so.3.0 00:03:19.332 SO libspdk_event_sock.so.5.0 00:03:19.332 LIB libspdk_event_fsdev.a 00:03:19.332 SO libspdk_event_keyring.so.1.0 00:03:19.332 SO libspdk_event_fsdev.so.1.0 00:03:19.332 SYMLINK libspdk_event_vhost_blk.so 00:03:19.332 SYMLINK libspdk_event_sock.so 00:03:19.332 LIB libspdk_event_iobuf.a 00:03:19.332 LIB libspdk_event_scheduler.a 00:03:19.332 SYMLINK libspdk_event_keyring.so 00:03:19.332 LIB libspdk_event_vmd.a 00:03:19.332 SYMLINK libspdk_event_fsdev.so 00:03:19.332 SO libspdk_event_scheduler.so.4.0 00:03:19.332 SO libspdk_event_iobuf.so.3.0 00:03:19.332 SO libspdk_event_vmd.so.6.0 00:03:19.332 SYMLINK libspdk_event_scheduler.so 00:03:19.332 SYMLINK libspdk_event_iobuf.so 00:03:19.332 SYMLINK libspdk_event_vmd.so 00:03:19.589 CC module/event/subsystems/accel/accel.o 00:03:19.589 LIB 
libspdk_event_accel.a 00:03:19.589 SO libspdk_event_accel.so.6.0 00:03:19.589 SYMLINK libspdk_event_accel.so 00:03:19.846 CC module/event/subsystems/bdev/bdev.o 00:03:20.192 LIB libspdk_event_bdev.a 00:03:20.192 SO libspdk_event_bdev.so.6.0 00:03:20.192 SYMLINK libspdk_event_bdev.so 00:03:20.192 CC module/event/subsystems/scsi/scsi.o 00:03:20.192 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:20.192 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:20.192 CC module/event/subsystems/ublk/ublk.o 00:03:20.463 CC module/event/subsystems/nbd/nbd.o 00:03:20.463 LIB libspdk_event_ublk.a 00:03:20.463 SO libspdk_event_ublk.so.3.0 00:03:20.463 LIB libspdk_event_nbd.a 00:03:20.463 LIB libspdk_event_scsi.a 00:03:20.463 SYMLINK libspdk_event_ublk.so 00:03:20.463 SO libspdk_event_nbd.so.6.0 00:03:20.463 SO libspdk_event_scsi.so.6.0 00:03:20.463 LIB libspdk_event_nvmf.a 00:03:20.463 SYMLINK libspdk_event_scsi.so 00:03:20.463 SO libspdk_event_nvmf.so.6.0 00:03:20.463 SYMLINK libspdk_event_nbd.so 00:03:20.463 SYMLINK libspdk_event_nvmf.so 00:03:20.720 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:20.720 CC module/event/subsystems/iscsi/iscsi.o 00:03:20.978 LIB libspdk_event_vhost_scsi.a 00:03:20.978 SO libspdk_event_vhost_scsi.so.3.0 00:03:20.978 LIB libspdk_event_iscsi.a 00:03:20.978 SO libspdk_event_iscsi.so.6.0 00:03:20.978 SYMLINK libspdk_event_vhost_scsi.so 00:03:20.978 SYMLINK libspdk_event_iscsi.so 00:03:20.978 SO libspdk.so.6.0 00:03:20.978 SYMLINK libspdk.so 00:03:21.235 CC app/trace_record/trace_record.o 00:03:21.235 CXX app/trace/trace.o 00:03:21.235 TEST_HEADER include/spdk/accel.h 00:03:21.235 TEST_HEADER include/spdk/accel_module.h 00:03:21.235 TEST_HEADER include/spdk/assert.h 00:03:21.235 TEST_HEADER include/spdk/barrier.h 00:03:21.235 TEST_HEADER include/spdk/base64.h 00:03:21.235 TEST_HEADER include/spdk/bdev.h 00:03:21.235 TEST_HEADER include/spdk/bdev_module.h 00:03:21.235 TEST_HEADER include/spdk/bdev_zone.h 00:03:21.235 TEST_HEADER include/spdk/bit_array.h 00:03:21.235 TEST_HEADER include/spdk/bit_pool.h 00:03:21.235 TEST_HEADER include/spdk/blob_bdev.h 00:03:21.235 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:21.235 TEST_HEADER include/spdk/blobfs.h 00:03:21.235 TEST_HEADER include/spdk/blob.h 00:03:21.235 TEST_HEADER include/spdk/conf.h 00:03:21.235 TEST_HEADER include/spdk/config.h 00:03:21.235 TEST_HEADER include/spdk/cpuset.h 00:03:21.235 TEST_HEADER include/spdk/crc16.h 00:03:21.235 TEST_HEADER include/spdk/crc32.h 00:03:21.235 TEST_HEADER include/spdk/crc64.h 00:03:21.235 TEST_HEADER include/spdk/dif.h 00:03:21.235 TEST_HEADER include/spdk/dma.h 00:03:21.235 TEST_HEADER include/spdk/endian.h 00:03:21.235 TEST_HEADER include/spdk/env_dpdk.h 00:03:21.235 TEST_HEADER include/spdk/env.h 00:03:21.235 TEST_HEADER include/spdk/event.h 00:03:21.235 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:21.235 TEST_HEADER include/spdk/fd_group.h 00:03:21.235 TEST_HEADER include/spdk/fd.h 00:03:21.235 TEST_HEADER include/spdk/file.h 00:03:21.235 CC examples/ioat/perf/perf.o 00:03:21.235 TEST_HEADER include/spdk/fsdev.h 00:03:21.235 TEST_HEADER include/spdk/fsdev_module.h 00:03:21.235 TEST_HEADER include/spdk/ftl.h 00:03:21.235 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:21.235 CC test/thread/poller_perf/poller_perf.o 00:03:21.235 TEST_HEADER include/spdk/gpt_spec.h 00:03:21.235 CC examples/util/zipf/zipf.o 00:03:21.235 TEST_HEADER include/spdk/hexlify.h 00:03:21.235 TEST_HEADER include/spdk/histogram_data.h 00:03:21.235 TEST_HEADER include/spdk/idxd.h 00:03:21.235 TEST_HEADER 
include/spdk/idxd_spec.h 00:03:21.235 TEST_HEADER include/spdk/init.h 00:03:21.236 TEST_HEADER include/spdk/ioat.h 00:03:21.236 TEST_HEADER include/spdk/ioat_spec.h 00:03:21.236 TEST_HEADER include/spdk/iscsi_spec.h 00:03:21.236 TEST_HEADER include/spdk/json.h 00:03:21.236 TEST_HEADER include/spdk/jsonrpc.h 00:03:21.236 TEST_HEADER include/spdk/keyring.h 00:03:21.236 TEST_HEADER include/spdk/keyring_module.h 00:03:21.236 TEST_HEADER include/spdk/likely.h 00:03:21.493 TEST_HEADER include/spdk/log.h 00:03:21.493 TEST_HEADER include/spdk/lvol.h 00:03:21.493 TEST_HEADER include/spdk/md5.h 00:03:21.493 TEST_HEADER include/spdk/memory.h 00:03:21.493 TEST_HEADER include/spdk/mmio.h 00:03:21.493 TEST_HEADER include/spdk/nbd.h 00:03:21.493 TEST_HEADER include/spdk/net.h 00:03:21.493 TEST_HEADER include/spdk/notify.h 00:03:21.493 TEST_HEADER include/spdk/nvme.h 00:03:21.493 TEST_HEADER include/spdk/nvme_intel.h 00:03:21.493 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:21.493 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:21.493 TEST_HEADER include/spdk/nvme_spec.h 00:03:21.493 CC test/dma/test_dma/test_dma.o 00:03:21.493 TEST_HEADER include/spdk/nvme_zns.h 00:03:21.493 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:21.493 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:21.493 TEST_HEADER include/spdk/nvmf.h 00:03:21.493 TEST_HEADER include/spdk/nvmf_spec.h 00:03:21.493 TEST_HEADER include/spdk/nvmf_transport.h 00:03:21.493 TEST_HEADER include/spdk/opal.h 00:03:21.493 TEST_HEADER include/spdk/opal_spec.h 00:03:21.493 TEST_HEADER include/spdk/pci_ids.h 00:03:21.493 TEST_HEADER include/spdk/pipe.h 00:03:21.493 TEST_HEADER include/spdk/queue.h 00:03:21.493 TEST_HEADER include/spdk/reduce.h 00:03:21.493 TEST_HEADER include/spdk/rpc.h 00:03:21.493 TEST_HEADER include/spdk/scheduler.h 00:03:21.493 TEST_HEADER include/spdk/scsi.h 00:03:21.493 TEST_HEADER include/spdk/scsi_spec.h 00:03:21.493 TEST_HEADER include/spdk/sock.h 00:03:21.493 TEST_HEADER include/spdk/stdinc.h 00:03:21.493 TEST_HEADER include/spdk/string.h 00:03:21.493 TEST_HEADER include/spdk/thread.h 00:03:21.493 CC test/app/bdev_svc/bdev_svc.o 00:03:21.493 TEST_HEADER include/spdk/trace.h 00:03:21.493 TEST_HEADER include/spdk/trace_parser.h 00:03:21.493 TEST_HEADER include/spdk/tree.h 00:03:21.493 CC test/env/mem_callbacks/mem_callbacks.o 00:03:21.493 TEST_HEADER include/spdk/ublk.h 00:03:21.493 TEST_HEADER include/spdk/util.h 00:03:21.493 TEST_HEADER include/spdk/uuid.h 00:03:21.493 TEST_HEADER include/spdk/version.h 00:03:21.493 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:21.493 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:21.493 TEST_HEADER include/spdk/vhost.h 00:03:21.493 TEST_HEADER include/spdk/vmd.h 00:03:21.493 TEST_HEADER include/spdk/xor.h 00:03:21.493 LINK spdk_trace_record 00:03:21.493 TEST_HEADER include/spdk/zipf.h 00:03:21.493 CXX test/cpp_headers/accel.o 00:03:21.493 LINK poller_perf 00:03:21.493 LINK zipf 00:03:21.493 LINK interrupt_tgt 00:03:21.493 CXX test/cpp_headers/accel_module.o 00:03:21.493 LINK bdev_svc 00:03:21.493 CXX test/cpp_headers/assert.o 00:03:21.751 LINK ioat_perf 00:03:21.751 CC test/app/histogram_perf/histogram_perf.o 00:03:21.751 LINK spdk_trace 00:03:21.751 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:21.751 CXX test/cpp_headers/barrier.o 00:03:21.751 CC test/rpc_client/rpc_client_test.o 00:03:21.751 LINK histogram_perf 00:03:21.751 LINK mem_callbacks 00:03:21.751 CC test/app/jsoncat/jsoncat.o 00:03:21.751 CC examples/ioat/verify/verify.o 00:03:22.008 CXX test/cpp_headers/base64.o 00:03:22.008 CXX 
test/cpp_headers/bdev.o 00:03:22.008 LINK test_dma 00:03:22.008 CC examples/thread/thread/thread_ex.o 00:03:22.008 LINK rpc_client_test 00:03:22.008 CC app/nvmf_tgt/nvmf_main.o 00:03:22.008 LINK jsoncat 00:03:22.008 CC test/env/vtophys/vtophys.o 00:03:22.008 CXX test/cpp_headers/bdev_module.o 00:03:22.008 CXX test/cpp_headers/bdev_zone.o 00:03:22.008 LINK nvme_fuzz 00:03:22.008 CXX test/cpp_headers/bit_array.o 00:03:22.008 LINK vtophys 00:03:22.008 LINK nvmf_tgt 00:03:22.008 LINK thread 00:03:22.266 LINK verify 00:03:22.266 CC test/event/event_perf/event_perf.o 00:03:22.266 CXX test/cpp_headers/bit_pool.o 00:03:22.266 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:22.266 CXX test/cpp_headers/blob_bdev.o 00:03:22.266 CC test/accel/dif/dif.o 00:03:22.266 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:22.266 LINK event_perf 00:03:22.266 CC test/blobfs/mkfs/mkfs.o 00:03:22.266 CC app/iscsi_tgt/iscsi_tgt.o 00:03:22.523 LINK env_dpdk_post_init 00:03:22.523 CXX test/cpp_headers/blobfs_bdev.o 00:03:22.523 CC examples/sock/hello_world/hello_sock.o 00:03:22.523 CC test/lvol/esnap/esnap.o 00:03:22.523 CC test/event/reactor/reactor.o 00:03:22.523 CC examples/vmd/lsvmd/lsvmd.o 00:03:22.523 LINK mkfs 00:03:22.523 LINK iscsi_tgt 00:03:22.523 CXX test/cpp_headers/blobfs.o 00:03:22.523 LINK hello_sock 00:03:22.523 CC test/env/memory/memory_ut.o 00:03:22.780 LINK reactor 00:03:22.780 CXX test/cpp_headers/blob.o 00:03:22.780 LINK lsvmd 00:03:22.780 CXX test/cpp_headers/conf.o 00:03:22.780 LINK dif 00:03:22.780 CC examples/idxd/perf/perf.o 00:03:22.780 CC test/nvme/aer/aer.o 00:03:22.780 CC test/event/reactor_perf/reactor_perf.o 00:03:22.780 CC app/spdk_tgt/spdk_tgt.o 00:03:23.037 CC examples/vmd/led/led.o 00:03:23.037 CXX test/cpp_headers/config.o 00:03:23.037 CXX test/cpp_headers/cpuset.o 00:03:23.037 LINK reactor_perf 00:03:23.037 LINK led 00:03:23.037 LINK spdk_tgt 00:03:23.037 LINK idxd_perf 00:03:23.037 CXX test/cpp_headers/crc16.o 00:03:23.037 LINK aer 00:03:23.295 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:23.295 CC test/event/app_repeat/app_repeat.o 00:03:23.295 CXX test/cpp_headers/crc32.o 00:03:23.295 CC app/spdk_lspci/spdk_lspci.o 00:03:23.295 CC test/nvme/reset/reset.o 00:03:23.295 LINK app_repeat 00:03:23.295 CC test/bdev/bdevio/bdevio.o 00:03:23.295 CC examples/accel/perf/accel_perf.o 00:03:23.295 CXX test/cpp_headers/crc64.o 00:03:23.552 LINK hello_fsdev 00:03:23.552 LINK spdk_lspci 00:03:23.552 LINK memory_ut 00:03:23.552 LINK reset 00:03:23.552 CC test/event/scheduler/scheduler.o 00:03:23.552 CXX test/cpp_headers/dif.o 00:03:23.552 CXX test/cpp_headers/dma.o 00:03:23.810 CC app/spdk_nvme_perf/perf.o 00:03:23.810 CC test/env/pci/pci_ut.o 00:03:23.810 CC test/nvme/sgl/sgl.o 00:03:23.810 LINK bdevio 00:03:23.810 LINK scheduler 00:03:23.810 CC examples/blob/hello_world/hello_blob.o 00:03:23.810 LINK accel_perf 00:03:23.810 CXX test/cpp_headers/endian.o 00:03:24.069 CXX test/cpp_headers/env_dpdk.o 00:03:24.069 LINK pci_ut 00:03:24.069 CC app/spdk_nvme_identify/identify.o 00:03:24.069 LINK hello_blob 00:03:24.069 LINK iscsi_fuzz 00:03:24.069 CC app/spdk_nvme_discover/discovery_aer.o 00:03:24.069 LINK sgl 00:03:24.069 CXX test/cpp_headers/env.o 00:03:24.069 CC examples/nvme/hello_world/hello_world.o 00:03:24.069 LINK spdk_nvme_discover 00:03:24.069 CXX test/cpp_headers/event.o 00:03:24.069 CC examples/blob/cli/blobcli.o 00:03:24.329 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:24.329 CC test/nvme/e2edp/nvme_dp.o 00:03:24.329 LINK spdk_nvme_perf 00:03:24.329 LINK hello_world 
00:03:24.329 CC examples/bdev/hello_world/hello_bdev.o 00:03:24.329 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:24.329 CXX test/cpp_headers/fd_group.o 00:03:24.329 CC test/nvme/overhead/overhead.o 00:03:24.587 CXX test/cpp_headers/fd.o 00:03:24.587 LINK nvme_dp 00:03:24.587 CC examples/nvme/reconnect/reconnect.o 00:03:24.587 CC app/spdk_top/spdk_top.o 00:03:24.587 LINK blobcli 00:03:24.587 LINK hello_bdev 00:03:24.587 CXX test/cpp_headers/file.o 00:03:24.587 LINK spdk_nvme_identify 00:03:24.587 LINK vhost_fuzz 00:03:24.587 CXX test/cpp_headers/fsdev.o 00:03:24.587 LINK overhead 00:03:24.843 CC test/app/stub/stub.o 00:03:24.843 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:24.843 CXX test/cpp_headers/fsdev_module.o 00:03:24.843 LINK reconnect 00:03:24.843 CC examples/nvme/arbitration/arbitration.o 00:03:24.843 CC examples/nvme/hotplug/hotplug.o 00:03:24.843 CC examples/bdev/bdevperf/bdevperf.o 00:03:24.843 LINK stub 00:03:24.843 CC test/nvme/err_injection/err_injection.o 00:03:24.843 CXX test/cpp_headers/ftl.o 00:03:25.100 CC test/nvme/startup/startup.o 00:03:25.100 LINK arbitration 00:03:25.100 LINK hotplug 00:03:25.100 LINK err_injection 00:03:25.100 LINK spdk_top 00:03:25.100 CC test/nvme/reserve/reserve.o 00:03:25.100 CXX test/cpp_headers/fuse_dispatcher.o 00:03:25.100 CXX test/cpp_headers/gpt_spec.o 00:03:25.100 LINK startup 00:03:25.100 LINK nvme_manage 00:03:25.358 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:25.358 CC test/nvme/simple_copy/simple_copy.o 00:03:25.358 LINK reserve 00:03:25.358 CC app/vhost/vhost.o 00:03:25.358 CXX test/cpp_headers/hexlify.o 00:03:25.358 CC test/nvme/connect_stress/connect_stress.o 00:03:25.358 LINK cmb_copy 00:03:25.358 CC app/spdk_dd/spdk_dd.o 00:03:25.358 CC test/nvme/boot_partition/boot_partition.o 00:03:25.358 CXX test/cpp_headers/histogram_data.o 00:03:25.358 LINK vhost 00:03:25.358 LINK simple_copy 00:03:25.358 CC test/nvme/compliance/nvme_compliance.o 00:03:25.616 LINK bdevperf 00:03:25.616 LINK connect_stress 00:03:25.616 LINK boot_partition 00:03:25.616 CXX test/cpp_headers/idxd.o 00:03:25.616 CC examples/nvme/abort/abort.o 00:03:25.616 CXX test/cpp_headers/idxd_spec.o 00:03:25.616 CXX test/cpp_headers/init.o 00:03:25.616 CXX test/cpp_headers/ioat.o 00:03:25.616 CXX test/cpp_headers/ioat_spec.o 00:03:25.873 LINK nvme_compliance 00:03:25.873 LINK spdk_dd 00:03:25.873 CXX test/cpp_headers/iscsi_spec.o 00:03:25.873 CC test/nvme/fused_ordering/fused_ordering.o 00:03:25.873 CXX test/cpp_headers/json.o 00:03:25.873 CXX test/cpp_headers/jsonrpc.o 00:03:25.873 CC app/fio/nvme/fio_plugin.o 00:03:25.873 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:25.873 LINK abort 00:03:25.873 CXX test/cpp_headers/keyring.o 00:03:25.873 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:26.131 LINK fused_ordering 00:03:26.131 CC test/nvme/fdp/fdp.o 00:03:26.131 CC test/nvme/cuse/cuse.o 00:03:26.131 CXX test/cpp_headers/keyring_module.o 00:03:26.131 LINK pmr_persistence 00:03:26.131 CC app/fio/bdev/fio_plugin.o 00:03:26.131 CXX test/cpp_headers/likely.o 00:03:26.131 LINK doorbell_aers 00:03:26.131 CXX test/cpp_headers/log.o 00:03:26.131 CXX test/cpp_headers/lvol.o 00:03:26.388 LINK fdp 00:03:26.388 LINK spdk_nvme 00:03:26.388 CXX test/cpp_headers/md5.o 00:03:26.388 CXX test/cpp_headers/memory.o 00:03:26.388 CXX test/cpp_headers/mmio.o 00:03:26.388 CXX test/cpp_headers/nbd.o 00:03:26.388 CXX test/cpp_headers/net.o 00:03:26.388 CXX test/cpp_headers/notify.o 00:03:26.388 CXX test/cpp_headers/nvme.o 00:03:26.388 CC examples/nvmf/nvmf/nvmf.o 00:03:26.388 CXX 
test/cpp_headers/nvme_intel.o 00:03:26.388 CXX test/cpp_headers/nvme_ocssd.o 00:03:26.388 LINK spdk_bdev 00:03:26.388 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:26.388 CXX test/cpp_headers/nvme_spec.o 00:03:26.669 CXX test/cpp_headers/nvme_zns.o 00:03:26.669 CXX test/cpp_headers/nvmf_cmd.o 00:03:26.669 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:26.669 CXX test/cpp_headers/nvmf.o 00:03:26.669 CXX test/cpp_headers/nvmf_spec.o 00:03:26.669 CXX test/cpp_headers/nvmf_transport.o 00:03:26.669 CXX test/cpp_headers/opal.o 00:03:26.669 CXX test/cpp_headers/opal_spec.o 00:03:26.669 LINK nvmf 00:03:26.669 LINK esnap 00:03:26.669 CXX test/cpp_headers/pci_ids.o 00:03:26.669 CXX test/cpp_headers/pipe.o 00:03:26.669 CXX test/cpp_headers/queue.o 00:03:26.669 CXX test/cpp_headers/reduce.o 00:03:26.669 CXX test/cpp_headers/rpc.o 00:03:26.927 CXX test/cpp_headers/scheduler.o 00:03:26.927 CXX test/cpp_headers/scsi.o 00:03:26.927 CXX test/cpp_headers/scsi_spec.o 00:03:26.927 CXX test/cpp_headers/sock.o 00:03:26.927 CXX test/cpp_headers/stdinc.o 00:03:26.927 CXX test/cpp_headers/string.o 00:03:26.927 CXX test/cpp_headers/thread.o 00:03:26.927 CXX test/cpp_headers/trace.o 00:03:26.927 CXX test/cpp_headers/trace_parser.o 00:03:26.927 CXX test/cpp_headers/tree.o 00:03:26.927 CXX test/cpp_headers/ublk.o 00:03:26.927 CXX test/cpp_headers/util.o 00:03:26.927 CXX test/cpp_headers/uuid.o 00:03:26.927 CXX test/cpp_headers/version.o 00:03:26.927 CXX test/cpp_headers/vfio_user_pci.o 00:03:26.927 CXX test/cpp_headers/vfio_user_spec.o 00:03:26.927 CXX test/cpp_headers/vhost.o 00:03:27.185 CXX test/cpp_headers/vmd.o 00:03:27.185 CXX test/cpp_headers/xor.o 00:03:27.185 LINK cuse 00:03:27.185 CXX test/cpp_headers/zipf.o 00:03:27.185 00:03:27.185 real 1m1.779s 00:03:27.185 user 6m0.275s 00:03:27.185 sys 1m6.454s 00:03:27.185 19:37:22 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:27.185 19:37:22 make -- common/autotest_common.sh@10 -- $ set +x 00:03:27.185 ************************************ 00:03:27.185 END TEST make 00:03:27.185 ************************************ 00:03:27.185 19:37:22 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:27.185 19:37:22 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:27.185 19:37:22 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:27.185 19:37:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:27.185 19:37:22 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:27.185 19:37:22 -- pm/common@44 -- $ pid=5035 00:03:27.185 19:37:22 -- pm/common@50 -- $ kill -TERM 5035 00:03:27.185 19:37:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:27.185 19:37:22 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:27.185 19:37:22 -- pm/common@44 -- $ pid=5036 00:03:27.185 19:37:22 -- pm/common@50 -- $ kill -TERM 5036 00:03:27.185 19:37:22 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:27.185 19:37:22 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:27.442 19:37:22 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:27.442 19:37:22 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:27.442 19:37:22 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:27.442 19:37:22 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:27.442 19:37:22 -- scripts/common.sh@373 -- # cmp_versions 1.15 
'<' 2 00:03:27.442 19:37:22 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:27.442 19:37:22 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:27.442 19:37:22 -- scripts/common.sh@336 -- # IFS=.-: 00:03:27.442 19:37:22 -- scripts/common.sh@336 -- # read -ra ver1 00:03:27.442 19:37:22 -- scripts/common.sh@337 -- # IFS=.-: 00:03:27.442 19:37:22 -- scripts/common.sh@337 -- # read -ra ver2 00:03:27.442 19:37:22 -- scripts/common.sh@338 -- # local 'op=<' 00:03:27.442 19:37:22 -- scripts/common.sh@340 -- # ver1_l=2 00:03:27.442 19:37:22 -- scripts/common.sh@341 -- # ver2_l=1 00:03:27.442 19:37:22 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:27.442 19:37:22 -- scripts/common.sh@344 -- # case "$op" in 00:03:27.442 19:37:22 -- scripts/common.sh@345 -- # : 1 00:03:27.442 19:37:22 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:27.442 19:37:22 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:27.442 19:37:22 -- scripts/common.sh@365 -- # decimal 1 00:03:27.442 19:37:22 -- scripts/common.sh@353 -- # local d=1 00:03:27.442 19:37:22 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:27.442 19:37:22 -- scripts/common.sh@355 -- # echo 1 00:03:27.442 19:37:22 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:27.442 19:37:22 -- scripts/common.sh@366 -- # decimal 2 00:03:27.443 19:37:22 -- scripts/common.sh@353 -- # local d=2 00:03:27.443 19:37:22 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:27.443 19:37:22 -- scripts/common.sh@355 -- # echo 2 00:03:27.443 19:37:22 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:27.443 19:37:22 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:27.443 19:37:22 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:27.443 19:37:22 -- scripts/common.sh@368 -- # return 0 00:03:27.443 19:37:22 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:27.443 19:37:22 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:27.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:27.443 --rc genhtml_branch_coverage=1 00:03:27.443 --rc genhtml_function_coverage=1 00:03:27.443 --rc genhtml_legend=1 00:03:27.443 --rc geninfo_all_blocks=1 00:03:27.443 --rc geninfo_unexecuted_blocks=1 00:03:27.443 00:03:27.443 ' 00:03:27.443 19:37:22 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:27.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:27.443 --rc genhtml_branch_coverage=1 00:03:27.443 --rc genhtml_function_coverage=1 00:03:27.443 --rc genhtml_legend=1 00:03:27.443 --rc geninfo_all_blocks=1 00:03:27.443 --rc geninfo_unexecuted_blocks=1 00:03:27.443 00:03:27.443 ' 00:03:27.443 19:37:22 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:27.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:27.443 --rc genhtml_branch_coverage=1 00:03:27.443 --rc genhtml_function_coverage=1 00:03:27.443 --rc genhtml_legend=1 00:03:27.443 --rc geninfo_all_blocks=1 00:03:27.443 --rc geninfo_unexecuted_blocks=1 00:03:27.443 00:03:27.443 ' 00:03:27.443 19:37:22 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:27.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:27.443 --rc genhtml_branch_coverage=1 00:03:27.443 --rc genhtml_function_coverage=1 00:03:27.443 --rc genhtml_legend=1 00:03:27.443 --rc geninfo_all_blocks=1 00:03:27.443 --rc geninfo_unexecuted_blocks=1 00:03:27.443 00:03:27.443 ' 00:03:27.443 19:37:22 -- spdk/autotest.sh@25 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:27.443 19:37:22 -- nvmf/common.sh@7 -- # uname -s 00:03:27.443 19:37:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:27.443 19:37:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:27.443 19:37:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:27.443 19:37:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:27.443 19:37:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:27.443 19:37:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:27.443 19:37:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:27.443 19:37:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:27.443 19:37:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:27.443 19:37:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:27.443 19:37:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:03:27.443 19:37:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=91838eb1-5852-43eb-90b2-09876f360ab2 00:03:27.443 19:37:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:27.443 19:37:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:27.443 19:37:22 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:03:27.443 19:37:22 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:27.443 19:37:22 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:27.443 19:37:22 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:27.443 19:37:22 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:27.443 19:37:22 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:27.443 19:37:22 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:27.443 19:37:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:27.443 19:37:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:27.443 19:37:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:27.443 19:37:22 -- paths/export.sh@5 -- # export PATH 00:03:27.443 19:37:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:27.443 19:37:22 -- nvmf/common.sh@51 -- # : 0 00:03:27.443 19:37:22 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:27.443 19:37:22 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:27.443 19:37:22 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:27.443 19:37:22 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:27.443 19:37:22 -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:03:27.443 19:37:22 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:27.443 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:27.443 19:37:22 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:27.443 19:37:22 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:27.443 19:37:22 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:27.443 19:37:22 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:27.443 19:37:22 -- spdk/autotest.sh@32 -- # uname -s 00:03:27.443 19:37:22 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:27.443 19:37:22 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:27.443 19:37:22 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:27.443 19:37:22 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:27.443 19:37:22 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:27.443 19:37:22 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:27.443 19:37:22 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:27.443 19:37:22 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:27.443 19:37:22 -- spdk/autotest.sh@48 -- # udevadm_pid=53775 00:03:27.443 19:37:22 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:27.443 19:37:22 -- pm/common@17 -- # local monitor 00:03:27.443 19:37:22 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:27.443 19:37:22 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:27.443 19:37:22 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:27.443 19:37:22 -- pm/common@25 -- # sleep 1 00:03:27.443 19:37:22 -- pm/common@21 -- # date +%s 00:03:27.443 19:37:22 -- pm/common@21 -- # date +%s 00:03:27.443 19:37:22 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732649842 00:03:27.443 19:37:22 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732649842 00:03:27.443 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732649842_collect-vmstat.pm.log 00:03:27.443 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732649842_collect-cpu-load.pm.log 00:03:28.375 19:37:23 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:28.375 19:37:23 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:28.375 19:37:23 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:28.375 19:37:23 -- common/autotest_common.sh@10 -- # set +x 00:03:28.375 19:37:23 -- spdk/autotest.sh@59 -- # create_test_list 00:03:28.375 19:37:23 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:28.375 19:37:23 -- common/autotest_common.sh@10 -- # set +x 00:03:28.633 19:37:23 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:28.633 19:37:23 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:28.633 19:37:23 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:28.633 19:37:23 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:28.633 19:37:23 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:28.633 19:37:23 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 
00:03:28.633 19:37:23 -- common/autotest_common.sh@1457 -- # uname 00:03:28.633 19:37:23 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:28.633 19:37:23 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:28.633 19:37:23 -- common/autotest_common.sh@1477 -- # uname 00:03:28.633 19:37:23 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:28.633 19:37:23 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:28.633 19:37:23 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:28.633 lcov: LCOV version 1.15 00:03:28.633 19:37:23 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:43.577 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:43.577 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:58.498 19:37:52 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:58.498 19:37:52 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:58.498 19:37:52 -- common/autotest_common.sh@10 -- # set +x 00:03:58.498 19:37:52 -- spdk/autotest.sh@78 -- # rm -f 00:03:58.498 19:37:52 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:58.498 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:58.498 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:58.498 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:58.498 19:37:53 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:58.498 19:37:53 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:58.498 19:37:53 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:58.498 19:37:53 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:03:58.498 19:37:53 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:58.498 19:37:53 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:03:58.498 19:37:53 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:58.498 19:37:53 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:58.498 19:37:53 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:58.498 19:37:53 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:58.498 19:37:53 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:03:58.498 19:37:53 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:03:58.498 19:37:53 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:58.498 19:37:53 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:58.498 19:37:53 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:58.498 19:37:53 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:03:58.498 19:37:53 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:03:58.498 19:37:53 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:58.498 19:37:53 -- common/autotest_common.sh@1653 -- 
# [[ none != none ]] 00:03:58.498 19:37:53 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:58.498 19:37:53 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:03:58.498 19:37:53 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:03:58.499 19:37:53 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:58.499 19:37:53 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:58.499 19:37:53 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:58.499 19:37:53 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:58.499 19:37:53 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:58.499 19:37:53 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:58.499 19:37:53 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:58.499 19:37:53 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:58.499 No valid GPT data, bailing 00:03:58.499 19:37:53 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:58.499 19:37:53 -- scripts/common.sh@394 -- # pt= 00:03:58.499 19:37:53 -- scripts/common.sh@395 -- # return 1 00:03:58.499 19:37:53 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:58.499 1+0 records in 00:03:58.499 1+0 records out 00:03:58.499 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0041293 s, 254 MB/s 00:03:58.499 19:37:53 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:58.499 19:37:53 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:58.499 19:37:53 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:03:58.499 19:37:53 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:03:58.499 19:37:53 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:58.499 No valid GPT data, bailing 00:03:58.499 19:37:53 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:58.499 19:37:53 -- scripts/common.sh@394 -- # pt= 00:03:58.499 19:37:53 -- scripts/common.sh@395 -- # return 1 00:03:58.499 19:37:53 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:58.499 1+0 records in 00:03:58.499 1+0 records out 00:03:58.499 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00500727 s, 209 MB/s 00:03:58.499 19:37:53 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:58.499 19:37:53 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:58.499 19:37:53 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:03:58.499 19:37:53 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:03:58.499 19:37:53 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:03:58.499 No valid GPT data, bailing 00:03:58.499 19:37:53 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:58.499 19:37:53 -- scripts/common.sh@394 -- # pt= 00:03:58.499 19:37:53 -- scripts/common.sh@395 -- # return 1 00:03:58.499 19:37:53 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:03:58.499 1+0 records in 00:03:58.499 1+0 records out 00:03:58.499 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0049465 s, 212 MB/s 00:03:58.499 19:37:53 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:58.499 19:37:53 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:58.499 19:37:53 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:03:58.499 19:37:53 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:03:58.499 19:37:53 -- scripts/common.sh@390 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:03:58.499 No valid GPT data, bailing 00:03:58.499 19:37:53 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:58.499 19:37:53 -- scripts/common.sh@394 -- # pt= 00:03:58.499 19:37:53 -- scripts/common.sh@395 -- # return 1 00:03:58.499 19:37:53 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:03:58.499 1+0 records in 00:03:58.499 1+0 records out 00:03:58.499 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00415763 s, 252 MB/s 00:03:58.499 19:37:53 -- spdk/autotest.sh@105 -- # sync 00:03:58.499 19:37:53 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:58.499 19:37:53 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:58.499 19:37:53 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:59.872 19:37:54 -- spdk/autotest.sh@111 -- # uname -s 00:03:59.872 19:37:54 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:59.872 19:37:54 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:59.872 19:37:54 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:00.443 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:00.443 Hugepages 00:04:00.444 node hugesize free / total 00:04:00.444 node0 1048576kB 0 / 0 00:04:00.444 node0 2048kB 0 / 0 00:04:00.444 00:04:00.444 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:00.444 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:00.444 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:00.444 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:00.444 19:37:55 -- spdk/autotest.sh@117 -- # uname -s 00:04:00.444 19:37:55 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:00.444 19:37:55 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:00.444 19:37:55 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:01.029 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:01.286 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:01.545 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:01.545 19:37:56 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:02.480 19:37:57 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:02.480 19:37:57 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:02.480 19:37:57 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:02.480 19:37:57 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:02.480 19:37:57 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:02.480 19:37:57 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:02.480 19:37:57 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:02.480 19:37:57 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:02.480 19:37:57 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:02.480 19:37:57 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:02.480 19:37:57 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:02.480 19:37:57 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:02.737 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 
00:04:02.995 Waiting for block devices as requested 00:04:02.995 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:02.995 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:02.995 19:37:58 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:02.995 19:37:58 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:02.995 19:37:58 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:02.995 19:37:58 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:02.995 19:37:58 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:02.995 19:37:58 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:02.995 19:37:58 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:02.995 19:37:58 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:02.995 19:37:58 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:02.995 19:37:58 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:02.995 19:37:58 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:02.995 19:37:58 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:02.995 19:37:58 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:02.995 19:37:58 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:02.995 19:37:58 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:02.995 19:37:58 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:02.995 19:37:58 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:02.995 19:37:58 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:02.995 19:37:58 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:02.995 19:37:58 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:02.995 19:37:58 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:02.995 19:37:58 -- common/autotest_common.sh@1543 -- # continue 00:04:02.995 19:37:58 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:02.995 19:37:58 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:02.995 19:37:58 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:02.995 19:37:58 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:02.995 19:37:58 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:02.995 19:37:58 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:02.995 19:37:58 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:02.995 19:37:58 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:02.995 19:37:58 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:02.995 19:37:58 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:02.995 19:37:58 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:02.995 19:37:58 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:02.995 19:37:58 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:02.995 19:37:58 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:02.995 19:37:58 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:02.995 19:37:58 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:02.995 19:37:58 
-- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:02.995 19:37:58 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:02.995 19:37:58 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:02.995 19:37:58 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:02.995 19:37:58 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:02.995 19:37:58 -- common/autotest_common.sh@1543 -- # continue 00:04:02.995 19:37:58 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:02.995 19:37:58 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:02.995 19:37:58 -- common/autotest_common.sh@10 -- # set +x 00:04:03.252 19:37:58 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:03.252 19:37:58 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:03.252 19:37:58 -- common/autotest_common.sh@10 -- # set +x 00:04:03.252 19:37:58 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:03.511 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:03.830 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:03.830 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:03.830 19:37:58 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:03.830 19:37:58 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:03.830 19:37:58 -- common/autotest_common.sh@10 -- # set +x 00:04:03.830 19:37:58 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:03.830 19:37:58 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:03.830 19:37:58 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:03.830 19:37:58 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:03.830 19:37:58 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:03.830 19:37:58 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:03.830 19:37:58 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:03.830 19:37:58 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:03.830 19:37:58 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:03.830 19:37:58 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:03.830 19:37:58 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:03.830 19:37:58 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:03.830 19:37:58 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:03.830 19:37:58 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:03.830 19:37:58 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:03.830 19:37:59 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:03.830 19:37:59 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:03.830 19:37:59 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:03.830 19:37:59 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:03.830 19:37:59 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:03.830 19:37:59 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:03.830 19:37:59 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:03.830 19:37:59 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:03.830 19:37:59 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:03.830 19:37:59 -- common/autotest_common.sh@1572 -- # return 0 
00:04:03.830 19:37:59 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:03.830 19:37:59 -- common/autotest_common.sh@1580 -- # return 0 00:04:03.830 19:37:59 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:03.830 19:37:59 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:03.830 19:37:59 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:03.830 19:37:59 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:03.830 19:37:59 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:03.830 19:37:59 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:03.830 19:37:59 -- common/autotest_common.sh@10 -- # set +x 00:04:03.830 19:37:59 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:04:03.830 19:37:59 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:04:03.830 19:37:59 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:04:03.830 19:37:59 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:03.830 19:37:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:03.830 19:37:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:03.830 19:37:59 -- common/autotest_common.sh@10 -- # set +x 00:04:03.830 ************************************ 00:04:03.830 START TEST env 00:04:03.830 ************************************ 00:04:03.830 19:37:59 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:04.087 * Looking for test storage... 00:04:04.088 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:04.088 19:37:59 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:04.088 19:37:59 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:04.088 19:37:59 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:04.088 19:37:59 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:04.088 19:37:59 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:04.088 19:37:59 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:04.088 19:37:59 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:04.088 19:37:59 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:04.088 19:37:59 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:04.088 19:37:59 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:04.088 19:37:59 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:04.088 19:37:59 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:04.088 19:37:59 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:04.088 19:37:59 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:04.088 19:37:59 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:04.088 19:37:59 env -- scripts/common.sh@344 -- # case "$op" in 00:04:04.088 19:37:59 env -- scripts/common.sh@345 -- # : 1 00:04:04.088 19:37:59 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:04.088 19:37:59 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:04.088 19:37:59 env -- scripts/common.sh@365 -- # decimal 1 00:04:04.088 19:37:59 env -- scripts/common.sh@353 -- # local d=1 00:04:04.088 19:37:59 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:04.088 19:37:59 env -- scripts/common.sh@355 -- # echo 1 00:04:04.088 19:37:59 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:04.088 19:37:59 env -- scripts/common.sh@366 -- # decimal 2 00:04:04.088 19:37:59 env -- scripts/common.sh@353 -- # local d=2 00:04:04.088 19:37:59 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:04.088 19:37:59 env -- scripts/common.sh@355 -- # echo 2 00:04:04.088 19:37:59 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:04.088 19:37:59 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:04.088 19:37:59 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:04.088 19:37:59 env -- scripts/common.sh@368 -- # return 0 00:04:04.088 19:37:59 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:04.088 19:37:59 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:04.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.088 --rc genhtml_branch_coverage=1 00:04:04.088 --rc genhtml_function_coverage=1 00:04:04.088 --rc genhtml_legend=1 00:04:04.088 --rc geninfo_all_blocks=1 00:04:04.088 --rc geninfo_unexecuted_blocks=1 00:04:04.088 00:04:04.088 ' 00:04:04.088 19:37:59 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:04.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.088 --rc genhtml_branch_coverage=1 00:04:04.088 --rc genhtml_function_coverage=1 00:04:04.088 --rc genhtml_legend=1 00:04:04.088 --rc geninfo_all_blocks=1 00:04:04.088 --rc geninfo_unexecuted_blocks=1 00:04:04.088 00:04:04.088 ' 00:04:04.088 19:37:59 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:04.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.088 --rc genhtml_branch_coverage=1 00:04:04.088 --rc genhtml_function_coverage=1 00:04:04.088 --rc genhtml_legend=1 00:04:04.088 --rc geninfo_all_blocks=1 00:04:04.088 --rc geninfo_unexecuted_blocks=1 00:04:04.088 00:04:04.088 ' 00:04:04.088 19:37:59 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:04.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.088 --rc genhtml_branch_coverage=1 00:04:04.088 --rc genhtml_function_coverage=1 00:04:04.088 --rc genhtml_legend=1 00:04:04.088 --rc geninfo_all_blocks=1 00:04:04.088 --rc geninfo_unexecuted_blocks=1 00:04:04.088 00:04:04.088 ' 00:04:04.088 19:37:59 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:04.088 19:37:59 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:04.088 19:37:59 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:04.088 19:37:59 env -- common/autotest_common.sh@10 -- # set +x 00:04:04.088 ************************************ 00:04:04.088 START TEST env_memory 00:04:04.088 ************************************ 00:04:04.088 19:37:59 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:04.088 00:04:04.088 00:04:04.088 CUnit - A unit testing framework for C - Version 2.1-3 00:04:04.088 http://cunit.sourceforge.net/ 00:04:04.088 00:04:04.088 00:04:04.088 Suite: memory 00:04:04.088 Test: alloc and free memory map ...[2024-11-26 19:37:59.216242] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:04.088 passed 00:04:04.088 Test: mem map translation ...[2024-11-26 19:37:59.239866] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:04.088 [2024-11-26 19:37:59.239901] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:04.088 [2024-11-26 19:37:59.239943] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:04.088 [2024-11-26 19:37:59.239949] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:04.088 passed 00:04:04.088 Test: mem map registration ...[2024-11-26 19:37:59.290870] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:04.088 [2024-11-26 19:37:59.290915] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:04.088 passed 00:04:04.346 Test: mem map adjacent registrations ...passed 00:04:04.346 00:04:04.346 Run Summary: Type Total Ran Passed Failed Inactive 00:04:04.346 suites 1 1 n/a 0 0 00:04:04.346 tests 4 4 4 0 0 00:04:04.346 asserts 152 152 152 0 n/a 00:04:04.346 00:04:04.346 Elapsed time = 0.169 seconds 00:04:04.346 00:04:04.346 real 0m0.182s 00:04:04.346 user 0m0.170s 00:04:04.346 sys 0m0.009s 00:04:04.346 19:37:59 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:04.346 19:37:59 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:04.346 ************************************ 00:04:04.346 END TEST env_memory 00:04:04.346 ************************************ 00:04:04.346 19:37:59 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:04.346 19:37:59 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:04.346 19:37:59 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:04.346 19:37:59 env -- common/autotest_common.sh@10 -- # set +x 00:04:04.346 ************************************ 00:04:04.346 START TEST env_vtophys 00:04:04.346 ************************************ 00:04:04.346 19:37:59 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:04.346 EAL: lib.eal log level changed from notice to debug 00:04:04.346 EAL: Detected lcore 0 as core 0 on socket 0 00:04:04.346 EAL: Detected lcore 1 as core 0 on socket 0 00:04:04.346 EAL: Detected lcore 2 as core 0 on socket 0 00:04:04.346 EAL: Detected lcore 3 as core 0 on socket 0 00:04:04.346 EAL: Detected lcore 4 as core 0 on socket 0 00:04:04.346 EAL: Detected lcore 5 as core 0 on socket 0 00:04:04.346 EAL: Detected lcore 6 as core 0 on socket 0 00:04:04.346 EAL: Detected lcore 7 as core 0 on socket 0 00:04:04.346 EAL: Detected lcore 8 as core 0 on socket 0 00:04:04.346 EAL: Detected lcore 9 as core 0 on socket 0 00:04:04.346 EAL: Maximum logical cores by configuration: 128 00:04:04.346 EAL: Detected CPU lcores: 10 00:04:04.346 EAL: Detected NUMA nodes: 1 00:04:04.346 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:04.346 EAL: Detected shared linkage of DPDK 00:04:04.346 EAL: No 
shared files mode enabled, IPC will be disabled 00:04:04.346 EAL: Selected IOVA mode 'PA' 00:04:04.346 EAL: Probing VFIO support... 00:04:04.346 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:04.346 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:04.346 EAL: Ask a virtual area of 0x2e000 bytes 00:04:04.346 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:04.346 EAL: Setting up physically contiguous memory... 00:04:04.346 EAL: Setting maximum number of open files to 524288 00:04:04.346 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:04.346 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:04.346 EAL: Ask a virtual area of 0x61000 bytes 00:04:04.346 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:04.346 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:04.346 EAL: Ask a virtual area of 0x400000000 bytes 00:04:04.346 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:04.346 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:04.346 EAL: Ask a virtual area of 0x61000 bytes 00:04:04.346 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:04.346 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:04.346 EAL: Ask a virtual area of 0x400000000 bytes 00:04:04.346 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:04.346 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:04.346 EAL: Ask a virtual area of 0x61000 bytes 00:04:04.346 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:04.346 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:04.346 EAL: Ask a virtual area of 0x400000000 bytes 00:04:04.346 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:04.346 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:04.346 EAL: Ask a virtual area of 0x61000 bytes 00:04:04.346 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:04.346 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:04.346 EAL: Ask a virtual area of 0x400000000 bytes 00:04:04.346 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:04.346 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:04.346 EAL: Hugepages will be freed exactly as allocated. 00:04:04.346 EAL: No shared files mode enabled, IPC is disabled 00:04:04.346 EAL: No shared files mode enabled, IPC is disabled 00:04:04.346 EAL: TSC frequency is ~2600000 KHz 00:04:04.346 EAL: Main lcore 0 is ready (tid=7fdb9a613a00;cpuset=[0]) 00:04:04.346 EAL: Trying to obtain current memory policy. 00:04:04.346 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.346 EAL: Restoring previous memory policy: 0 00:04:04.346 EAL: request: mp_malloc_sync 00:04:04.346 EAL: No shared files mode enabled, IPC is disabled 00:04:04.346 EAL: Heap on socket 0 was expanded by 2MB 00:04:04.346 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:04.346 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:04.346 EAL: Mem event callback 'spdk:(nil)' registered 00:04:04.346 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:04:04.346 00:04:04.346 00:04:04.346 CUnit - A unit testing framework for C - Version 2.1-3 00:04:04.346 http://cunit.sourceforge.net/ 00:04:04.346 00:04:04.346 00:04:04.346 Suite: components_suite 00:04:04.346 Test: vtophys_malloc_test ...passed 00:04:04.346 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:04.346 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.346 EAL: Restoring previous memory policy: 4 00:04:04.346 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.346 EAL: request: mp_malloc_sync 00:04:04.346 EAL: No shared files mode enabled, IPC is disabled 00:04:04.346 EAL: Heap on socket 0 was expanded by 4MB 00:04:04.346 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.346 EAL: request: mp_malloc_sync 00:04:04.346 EAL: No shared files mode enabled, IPC is disabled 00:04:04.346 EAL: Heap on socket 0 was shrunk by 4MB 00:04:04.346 EAL: Trying to obtain current memory policy. 00:04:04.346 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.346 EAL: Restoring previous memory policy: 4 00:04:04.346 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.346 EAL: request: mp_malloc_sync 00:04:04.346 EAL: No shared files mode enabled, IPC is disabled 00:04:04.346 EAL: Heap on socket 0 was expanded by 6MB 00:04:04.346 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.346 EAL: request: mp_malloc_sync 00:04:04.346 EAL: No shared files mode enabled, IPC is disabled 00:04:04.346 EAL: Heap on socket 0 was shrunk by 6MB 00:04:04.346 EAL: Trying to obtain current memory policy. 00:04:04.346 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.346 EAL: Restoring previous memory policy: 4 00:04:04.346 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.346 EAL: request: mp_malloc_sync 00:04:04.346 EAL: No shared files mode enabled, IPC is disabled 00:04:04.346 EAL: Heap on socket 0 was expanded by 10MB 00:04:04.346 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.346 EAL: request: mp_malloc_sync 00:04:04.346 EAL: No shared files mode enabled, IPC is disabled 00:04:04.346 EAL: Heap on socket 0 was shrunk by 10MB 00:04:04.346 EAL: Trying to obtain current memory policy. 00:04:04.346 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.346 EAL: Restoring previous memory policy: 4 00:04:04.346 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.346 EAL: request: mp_malloc_sync 00:04:04.346 EAL: No shared files mode enabled, IPC is disabled 00:04:04.346 EAL: Heap on socket 0 was expanded by 18MB 00:04:04.346 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.346 EAL: request: mp_malloc_sync 00:04:04.346 EAL: No shared files mode enabled, IPC is disabled 00:04:04.346 EAL: Heap on socket 0 was shrunk by 18MB 00:04:04.346 EAL: Trying to obtain current memory policy. 00:04:04.346 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.346 EAL: Restoring previous memory policy: 4 00:04:04.346 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.346 EAL: request: mp_malloc_sync 00:04:04.346 EAL: No shared files mode enabled, IPC is disabled 00:04:04.346 EAL: Heap on socket 0 was expanded by 34MB 00:04:04.346 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.346 EAL: request: mp_malloc_sync 00:04:04.346 EAL: No shared files mode enabled, IPC is disabled 00:04:04.346 EAL: Heap on socket 0 was shrunk by 34MB 00:04:04.346 EAL: Trying to obtain current memory policy. 
00:04:04.346 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.346 EAL: Restoring previous memory policy: 4 00:04:04.346 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.346 EAL: request: mp_malloc_sync 00:04:04.346 EAL: No shared files mode enabled, IPC is disabled 00:04:04.346 EAL: Heap on socket 0 was expanded by 66MB 00:04:04.346 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.346 EAL: request: mp_malloc_sync 00:04:04.346 EAL: No shared files mode enabled, IPC is disabled 00:04:04.346 EAL: Heap on socket 0 was shrunk by 66MB 00:04:04.346 EAL: Trying to obtain current memory policy. 00:04:04.346 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.604 EAL: Restoring previous memory policy: 4 00:04:04.604 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.604 EAL: request: mp_malloc_sync 00:04:04.604 EAL: No shared files mode enabled, IPC is disabled 00:04:04.604 EAL: Heap on socket 0 was expanded by 130MB 00:04:04.604 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.604 EAL: request: mp_malloc_sync 00:04:04.604 EAL: No shared files mode enabled, IPC is disabled 00:04:04.604 EAL: Heap on socket 0 was shrunk by 130MB 00:04:04.604 EAL: Trying to obtain current memory policy. 00:04:04.604 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.604 EAL: Restoring previous memory policy: 4 00:04:04.604 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.604 EAL: request: mp_malloc_sync 00:04:04.604 EAL: No shared files mode enabled, IPC is disabled 00:04:04.604 EAL: Heap on socket 0 was expanded by 258MB 00:04:04.604 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.604 EAL: request: mp_malloc_sync 00:04:04.604 EAL: No shared files mode enabled, IPC is disabled 00:04:04.604 EAL: Heap on socket 0 was shrunk by 258MB 00:04:04.604 EAL: Trying to obtain current memory policy. 00:04:04.605 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.605 EAL: Restoring previous memory policy: 4 00:04:04.605 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.605 EAL: request: mp_malloc_sync 00:04:04.605 EAL: No shared files mode enabled, IPC is disabled 00:04:04.605 EAL: Heap on socket 0 was expanded by 514MB 00:04:04.862 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.862 EAL: request: mp_malloc_sync 00:04:04.862 EAL: No shared files mode enabled, IPC is disabled 00:04:04.862 EAL: Heap on socket 0 was shrunk by 514MB 00:04:04.862 EAL: Trying to obtain current memory policy. 
00:04:04.862 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.863 EAL: Restoring previous memory policy: 4 00:04:04.863 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.863 EAL: request: mp_malloc_sync 00:04:04.863 EAL: No shared files mode enabled, IPC is disabled 00:04:04.863 EAL: Heap on socket 0 was expanded by 1026MB 00:04:05.122 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.122 passed 00:04:05.122 00:04:05.122 Run Summary: Type Total Ran Passed Failed Inactive 00:04:05.122 suites 1 1 n/a 0 0 00:04:05.122 tests 2 2 2 0 0 00:04:05.122 asserts 5358 5358 5358 0 n/a 00:04:05.122 00:04:05.123 Elapsed time = 0.705 seconds 00:04:05.123 EAL: request: mp_malloc_sync 00:04:05.123 EAL: No shared files mode enabled, IPC is disabled 00:04:05.123 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:05.123 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.123 EAL: request: mp_malloc_sync 00:04:05.123 EAL: No shared files mode enabled, IPC is disabled 00:04:05.123 EAL: Heap on socket 0 was shrunk by 2MB 00:04:05.123 EAL: No shared files mode enabled, IPC is disabled 00:04:05.123 EAL: No shared files mode enabled, IPC is disabled 00:04:05.123 EAL: No shared files mode enabled, IPC is disabled 00:04:05.123 00:04:05.123 real 0m0.893s 00:04:05.123 user 0m0.433s 00:04:05.123 sys 0m0.329s 00:04:05.123 19:38:00 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:05.123 ************************************ 00:04:05.123 END TEST env_vtophys 00:04:05.123 ************************************ 00:04:05.123 19:38:00 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:05.123 19:38:00 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:05.123 19:38:00 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:05.123 19:38:00 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:05.123 19:38:00 env -- common/autotest_common.sh@10 -- # set +x 00:04:05.123 ************************************ 00:04:05.123 START TEST env_pci 00:04:05.123 ************************************ 00:04:05.123 19:38:00 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:05.123 00:04:05.123 00:04:05.123 CUnit - A unit testing framework for C - Version 2.1-3 00:04:05.123 http://cunit.sourceforge.net/ 00:04:05.123 00:04:05.123 00:04:05.123 Suite: pci 00:04:05.123 Test: pci_hook ...[2024-11-26 19:38:00.366161] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 55967 has claimed it 00:04:05.385 passed 00:04:05.385 00:04:05.385 Run Summary: Type Total Ran Passed Failed Inactive 00:04:05.385 suites 1 1 n/a 0 0 00:04:05.385 tests 1 1 1 0 0 00:04:05.385 asserts 25 25 25 0 n/a 00:04:05.385 00:04:05.385 Elapsed time = 0.001 seconds 00:04:05.385 EAL: Cannot find device (10000:00:01.0) 00:04:05.385 EAL: Failed to attach device on primary process 00:04:05.385 00:04:05.385 real 0m0.017s 00:04:05.385 user 0m0.006s 00:04:05.385 sys 0m0.009s 00:04:05.385 19:38:00 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:05.385 ************************************ 00:04:05.385 END TEST env_pci 00:04:05.385 ************************************ 00:04:05.385 19:38:00 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:05.385 19:38:00 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:05.385 19:38:00 env -- env/env.sh@15 -- # uname 00:04:05.385 19:38:00 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:05.385 19:38:00 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:05.385 19:38:00 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:05.385 19:38:00 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:05.385 19:38:00 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:05.385 19:38:00 env -- common/autotest_common.sh@10 -- # set +x 00:04:05.385 ************************************ 00:04:05.385 START TEST env_dpdk_post_init 00:04:05.385 ************************************ 00:04:05.385 19:38:00 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:05.385 EAL: Detected CPU lcores: 10 00:04:05.385 EAL: Detected NUMA nodes: 1 00:04:05.385 EAL: Detected shared linkage of DPDK 00:04:05.385 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:05.385 EAL: Selected IOVA mode 'PA' 00:04:05.385 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:05.385 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:05.385 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:05.385 Starting DPDK initialization... 00:04:05.385 Starting SPDK post initialization... 00:04:05.385 SPDK NVMe probe 00:04:05.385 Attaching to 0000:00:10.0 00:04:05.385 Attaching to 0000:00:11.0 00:04:05.385 Attached to 0000:00:10.0 00:04:05.385 Attached to 0000:00:11.0 00:04:05.385 Cleaning up... 00:04:05.385 00:04:05.385 real 0m0.175s 00:04:05.385 user 0m0.049s 00:04:05.385 sys 0m0.027s 00:04:05.385 19:38:00 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:05.385 19:38:00 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:05.385 ************************************ 00:04:05.385 END TEST env_dpdk_post_init 00:04:05.385 ************************************ 00:04:05.647 19:38:00 env -- env/env.sh@26 -- # uname 00:04:05.647 19:38:00 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:05.647 19:38:00 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:05.647 19:38:00 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:05.647 19:38:00 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:05.647 19:38:00 env -- common/autotest_common.sh@10 -- # set +x 00:04:05.647 ************************************ 00:04:05.647 START TEST env_mem_callbacks 00:04:05.647 ************************************ 00:04:05.647 19:38:00 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:05.647 EAL: Detected CPU lcores: 10 00:04:05.647 EAL: Detected NUMA nodes: 1 00:04:05.647 EAL: Detected shared linkage of DPDK 00:04:05.647 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:05.647 EAL: Selected IOVA mode 'PA' 00:04:05.647 00:04:05.647 00:04:05.647 CUnit - A unit testing framework for C - Version 2.1-3 00:04:05.647 http://cunit.sourceforge.net/ 00:04:05.647 00:04:05.647 00:04:05.647 Suite: memory 00:04:05.647 Test: test ... 
00:04:05.647 register 0x200000200000 2097152 00:04:05.647 malloc 3145728 00:04:05.647 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:05.647 register 0x200000400000 4194304 00:04:05.647 buf 0x200000500000 len 3145728 PASSED 00:04:05.647 malloc 64 00:04:05.647 buf 0x2000004fff40 len 64 PASSED 00:04:05.647 malloc 4194304 00:04:05.647 register 0x200000800000 6291456 00:04:05.647 buf 0x200000a00000 len 4194304 PASSED 00:04:05.647 free 0x200000500000 3145728 00:04:05.647 free 0x2000004fff40 64 00:04:05.647 unregister 0x200000400000 4194304 PASSED 00:04:05.647 free 0x200000a00000 4194304 00:04:05.647 unregister 0x200000800000 6291456 PASSED 00:04:05.647 malloc 8388608 00:04:05.647 register 0x200000400000 10485760 00:04:05.647 buf 0x200000600000 len 8388608 PASSED 00:04:05.647 free 0x200000600000 8388608 00:04:05.647 unregister 0x200000400000 10485760 PASSED 00:04:05.647 passed 00:04:05.647 00:04:05.647 Run Summary: Type Total Ran Passed Failed Inactive 00:04:05.647 suites 1 1 n/a 0 0 00:04:05.647 tests 1 1 1 0 0 00:04:05.647 asserts 15 15 15 0 n/a 00:04:05.647 00:04:05.647 Elapsed time = 0.007 seconds 00:04:05.647 00:04:05.647 real 0m0.131s 00:04:05.647 user 0m0.013s 00:04:05.647 sys 0m0.017s 00:04:05.647 19:38:00 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:05.647 ************************************ 00:04:05.647 END TEST env_mem_callbacks 00:04:05.647 ************************************ 00:04:05.647 19:38:00 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:05.647 00:04:05.647 real 0m1.828s 00:04:05.647 user 0m0.835s 00:04:05.647 sys 0m0.588s 00:04:05.647 19:38:00 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:05.647 ************************************ 00:04:05.647 END TEST env 00:04:05.647 ************************************ 00:04:05.647 19:38:00 env -- common/autotest_common.sh@10 -- # set +x 00:04:05.647 19:38:00 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:05.647 19:38:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:05.647 19:38:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:05.647 19:38:00 -- common/autotest_common.sh@10 -- # set +x 00:04:05.906 ************************************ 00:04:05.906 START TEST rpc 00:04:05.906 ************************************ 00:04:05.906 19:38:00 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:05.906 * Looking for test storage... 
00:04:05.906 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:05.906 19:38:00 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:05.906 19:38:00 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:05.906 19:38:00 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:05.906 19:38:01 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:05.906 19:38:01 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:05.906 19:38:01 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:05.906 19:38:01 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:05.906 19:38:01 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:05.906 19:38:01 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:05.906 19:38:01 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:05.906 19:38:01 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:05.906 19:38:01 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:05.906 19:38:01 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:05.906 19:38:01 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:05.906 19:38:01 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:05.906 19:38:01 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:05.906 19:38:01 rpc -- scripts/common.sh@345 -- # : 1 00:04:05.906 19:38:01 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:05.906 19:38:01 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:05.906 19:38:01 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:05.906 19:38:01 rpc -- scripts/common.sh@353 -- # local d=1 00:04:05.906 19:38:01 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:05.906 19:38:01 rpc -- scripts/common.sh@355 -- # echo 1 00:04:05.906 19:38:01 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:05.906 19:38:01 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:05.906 19:38:01 rpc -- scripts/common.sh@353 -- # local d=2 00:04:05.906 19:38:01 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:05.906 19:38:01 rpc -- scripts/common.sh@355 -- # echo 2 00:04:05.906 19:38:01 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:05.906 19:38:01 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:05.906 19:38:01 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:05.906 19:38:01 rpc -- scripts/common.sh@368 -- # return 0 00:04:05.906 19:38:01 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:05.906 19:38:01 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:05.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.906 --rc genhtml_branch_coverage=1 00:04:05.906 --rc genhtml_function_coverage=1 00:04:05.906 --rc genhtml_legend=1 00:04:05.906 --rc geninfo_all_blocks=1 00:04:05.906 --rc geninfo_unexecuted_blocks=1 00:04:05.906 00:04:05.906 ' 00:04:05.906 19:38:01 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:05.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.906 --rc genhtml_branch_coverage=1 00:04:05.906 --rc genhtml_function_coverage=1 00:04:05.906 --rc genhtml_legend=1 00:04:05.906 --rc geninfo_all_blocks=1 00:04:05.906 --rc geninfo_unexecuted_blocks=1 00:04:05.906 00:04:05.906 ' 00:04:05.906 19:38:01 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:05.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.906 --rc genhtml_branch_coverage=1 00:04:05.906 --rc genhtml_function_coverage=1 00:04:05.906 --rc 
genhtml_legend=1 00:04:05.906 --rc geninfo_all_blocks=1 00:04:05.906 --rc geninfo_unexecuted_blocks=1 00:04:05.906 00:04:05.906 ' 00:04:05.906 19:38:01 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:05.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.906 --rc genhtml_branch_coverage=1 00:04:05.906 --rc genhtml_function_coverage=1 00:04:05.906 --rc genhtml_legend=1 00:04:05.906 --rc geninfo_all_blocks=1 00:04:05.906 --rc geninfo_unexecuted_blocks=1 00:04:05.906 00:04:05.906 ' 00:04:05.906 19:38:01 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56084 00:04:05.906 19:38:01 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:05.906 19:38:01 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56084 00:04:05.906 19:38:01 rpc -- common/autotest_common.sh@835 -- # '[' -z 56084 ']' 00:04:05.906 19:38:01 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:05.906 19:38:01 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:05.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:05.906 19:38:01 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:05.906 19:38:01 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:05.906 19:38:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:05.907 19:38:01 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:05.907 [2024-11-26 19:38:01.083170] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:04:05.907 [2024-11-26 19:38:01.083239] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56084 ] 00:04:06.164 [2024-11-26 19:38:01.223153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:06.164 [2024-11-26 19:38:01.259170] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:06.164 [2024-11-26 19:38:01.259221] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56084' to capture a snapshot of events at runtime. 00:04:06.164 [2024-11-26 19:38:01.259228] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:06.164 [2024-11-26 19:38:01.259233] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:06.164 [2024-11-26 19:38:01.259237] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56084 for offline analysis/debug. 
00:04:06.164 [2024-11-26 19:38:01.259504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:06.164 [2024-11-26 19:38:01.304915] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:06.732 19:38:01 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:06.732 19:38:01 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:06.732 19:38:01 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:06.732 19:38:01 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:06.732 19:38:01 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:06.732 19:38:01 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:06.732 19:38:01 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:06.732 19:38:01 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:06.732 19:38:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:06.732 ************************************ 00:04:06.732 START TEST rpc_integrity 00:04:06.732 ************************************ 00:04:06.732 19:38:01 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:06.732 19:38:01 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:06.732 19:38:01 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.732 19:38:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.732 19:38:01 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.732 19:38:01 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:06.732 19:38:01 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:06.993 19:38:02 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:06.993 19:38:02 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:06.993 19:38:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.993 19:38:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.993 19:38:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.993 19:38:02 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:06.993 19:38:02 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:06.993 19:38:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.993 19:38:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.993 19:38:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.993 19:38:02 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:06.993 { 00:04:06.993 "name": "Malloc0", 00:04:06.993 "aliases": [ 00:04:06.993 "5b3f7276-a27a-4d0d-9c0b-28022edf51d9" 00:04:06.993 ], 00:04:06.993 "product_name": "Malloc disk", 00:04:06.993 "block_size": 512, 00:04:06.993 "num_blocks": 16384, 00:04:06.993 "uuid": "5b3f7276-a27a-4d0d-9c0b-28022edf51d9", 00:04:06.993 "assigned_rate_limits": { 00:04:06.993 "rw_ios_per_sec": 0, 00:04:06.993 "rw_mbytes_per_sec": 0, 00:04:06.993 "r_mbytes_per_sec": 0, 00:04:06.993 "w_mbytes_per_sec": 0 00:04:06.993 }, 00:04:06.993 "claimed": false, 00:04:06.993 "zoned": false, 00:04:06.993 
"supported_io_types": { 00:04:06.993 "read": true, 00:04:06.993 "write": true, 00:04:06.993 "unmap": true, 00:04:06.993 "flush": true, 00:04:06.993 "reset": true, 00:04:06.993 "nvme_admin": false, 00:04:06.993 "nvme_io": false, 00:04:06.993 "nvme_io_md": false, 00:04:06.993 "write_zeroes": true, 00:04:06.993 "zcopy": true, 00:04:06.993 "get_zone_info": false, 00:04:06.993 "zone_management": false, 00:04:06.993 "zone_append": false, 00:04:06.993 "compare": false, 00:04:06.993 "compare_and_write": false, 00:04:06.993 "abort": true, 00:04:06.993 "seek_hole": false, 00:04:06.993 "seek_data": false, 00:04:06.993 "copy": true, 00:04:06.993 "nvme_iov_md": false 00:04:06.993 }, 00:04:06.993 "memory_domains": [ 00:04:06.993 { 00:04:06.993 "dma_device_id": "system", 00:04:06.993 "dma_device_type": 1 00:04:06.993 }, 00:04:06.993 { 00:04:06.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:06.993 "dma_device_type": 2 00:04:06.993 } 00:04:06.993 ], 00:04:06.993 "driver_specific": {} 00:04:06.993 } 00:04:06.993 ]' 00:04:06.993 19:38:02 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:06.993 19:38:02 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:06.993 19:38:02 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:06.993 19:38:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.993 19:38:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.993 [2024-11-26 19:38:02.068903] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:06.993 [2024-11-26 19:38:02.068949] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:06.993 [2024-11-26 19:38:02.068961] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1062050 00:04:06.993 [2024-11-26 19:38:02.068968] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:06.993 [2024-11-26 19:38:02.070362] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:06.993 [2024-11-26 19:38:02.070390] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:06.993 Passthru0 00:04:06.993 19:38:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.993 19:38:02 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:06.993 19:38:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.993 19:38:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.993 19:38:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.993 19:38:02 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:06.993 { 00:04:06.993 "name": "Malloc0", 00:04:06.993 "aliases": [ 00:04:06.993 "5b3f7276-a27a-4d0d-9c0b-28022edf51d9" 00:04:06.993 ], 00:04:06.993 "product_name": "Malloc disk", 00:04:06.993 "block_size": 512, 00:04:06.993 "num_blocks": 16384, 00:04:06.993 "uuid": "5b3f7276-a27a-4d0d-9c0b-28022edf51d9", 00:04:06.993 "assigned_rate_limits": { 00:04:06.993 "rw_ios_per_sec": 0, 00:04:06.993 "rw_mbytes_per_sec": 0, 00:04:06.993 "r_mbytes_per_sec": 0, 00:04:06.993 "w_mbytes_per_sec": 0 00:04:06.993 }, 00:04:06.993 "claimed": true, 00:04:06.993 "claim_type": "exclusive_write", 00:04:06.993 "zoned": false, 00:04:06.993 "supported_io_types": { 00:04:06.993 "read": true, 00:04:06.993 "write": true, 00:04:06.993 "unmap": true, 00:04:06.993 "flush": true, 00:04:06.993 "reset": true, 00:04:06.993 "nvme_admin": false, 
00:04:06.993 "nvme_io": false, 00:04:06.993 "nvme_io_md": false, 00:04:06.993 "write_zeroes": true, 00:04:06.993 "zcopy": true, 00:04:06.993 "get_zone_info": false, 00:04:06.993 "zone_management": false, 00:04:06.993 "zone_append": false, 00:04:06.993 "compare": false, 00:04:06.993 "compare_and_write": false, 00:04:06.993 "abort": true, 00:04:06.993 "seek_hole": false, 00:04:06.993 "seek_data": false, 00:04:06.993 "copy": true, 00:04:06.993 "nvme_iov_md": false 00:04:06.993 }, 00:04:06.993 "memory_domains": [ 00:04:06.993 { 00:04:06.993 "dma_device_id": "system", 00:04:06.993 "dma_device_type": 1 00:04:06.993 }, 00:04:06.993 { 00:04:06.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:06.993 "dma_device_type": 2 00:04:06.993 } 00:04:06.993 ], 00:04:06.993 "driver_specific": {} 00:04:06.993 }, 00:04:06.993 { 00:04:06.993 "name": "Passthru0", 00:04:06.993 "aliases": [ 00:04:06.993 "0c7bee2f-7431-5657-8cc6-a0c248961db2" 00:04:06.993 ], 00:04:06.993 "product_name": "passthru", 00:04:06.993 "block_size": 512, 00:04:06.993 "num_blocks": 16384, 00:04:06.993 "uuid": "0c7bee2f-7431-5657-8cc6-a0c248961db2", 00:04:06.994 "assigned_rate_limits": { 00:04:06.994 "rw_ios_per_sec": 0, 00:04:06.994 "rw_mbytes_per_sec": 0, 00:04:06.994 "r_mbytes_per_sec": 0, 00:04:06.994 "w_mbytes_per_sec": 0 00:04:06.994 }, 00:04:06.994 "claimed": false, 00:04:06.994 "zoned": false, 00:04:06.994 "supported_io_types": { 00:04:06.994 "read": true, 00:04:06.994 "write": true, 00:04:06.994 "unmap": true, 00:04:06.994 "flush": true, 00:04:06.994 "reset": true, 00:04:06.994 "nvme_admin": false, 00:04:06.994 "nvme_io": false, 00:04:06.994 "nvme_io_md": false, 00:04:06.994 "write_zeroes": true, 00:04:06.994 "zcopy": true, 00:04:06.994 "get_zone_info": false, 00:04:06.994 "zone_management": false, 00:04:06.994 "zone_append": false, 00:04:06.994 "compare": false, 00:04:06.994 "compare_and_write": false, 00:04:06.994 "abort": true, 00:04:06.994 "seek_hole": false, 00:04:06.994 "seek_data": false, 00:04:06.994 "copy": true, 00:04:06.994 "nvme_iov_md": false 00:04:06.994 }, 00:04:06.994 "memory_domains": [ 00:04:06.994 { 00:04:06.994 "dma_device_id": "system", 00:04:06.994 "dma_device_type": 1 00:04:06.994 }, 00:04:06.994 { 00:04:06.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:06.994 "dma_device_type": 2 00:04:06.994 } 00:04:06.994 ], 00:04:06.994 "driver_specific": { 00:04:06.994 "passthru": { 00:04:06.994 "name": "Passthru0", 00:04:06.994 "base_bdev_name": "Malloc0" 00:04:06.994 } 00:04:06.994 } 00:04:06.994 } 00:04:06.994 ]' 00:04:06.994 19:38:02 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:06.994 19:38:02 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:06.994 19:38:02 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:06.994 19:38:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.994 19:38:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.994 19:38:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.994 19:38:02 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:06.994 19:38:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.994 19:38:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.994 19:38:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.994 19:38:02 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:06.994 19:38:02 rpc.rpc_integrity -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.994 19:38:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.994 19:38:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.994 19:38:02 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:06.994 19:38:02 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:06.994 19:38:02 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:06.994 00:04:06.994 real 0m0.239s 00:04:06.994 user 0m0.133s 00:04:06.994 sys 0m0.036s 00:04:06.994 19:38:02 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:06.994 ************************************ 00:04:06.994 END TEST rpc_integrity 00:04:06.994 ************************************ 00:04:06.994 19:38:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.253 19:38:02 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:07.253 19:38:02 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:07.253 19:38:02 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:07.253 19:38:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.253 ************************************ 00:04:07.253 START TEST rpc_plugins 00:04:07.253 ************************************ 00:04:07.253 19:38:02 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:07.253 19:38:02 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:07.253 19:38:02 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.253 19:38:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:07.253 19:38:02 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:07.253 19:38:02 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:07.253 19:38:02 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:07.253 19:38:02 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.253 19:38:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:07.253 19:38:02 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:07.253 19:38:02 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:07.253 { 00:04:07.253 "name": "Malloc1", 00:04:07.253 "aliases": [ 00:04:07.253 "cae262d8-9749-4869-b379-2264757b9853" 00:04:07.253 ], 00:04:07.253 "product_name": "Malloc disk", 00:04:07.253 "block_size": 4096, 00:04:07.253 "num_blocks": 256, 00:04:07.253 "uuid": "cae262d8-9749-4869-b379-2264757b9853", 00:04:07.253 "assigned_rate_limits": { 00:04:07.253 "rw_ios_per_sec": 0, 00:04:07.253 "rw_mbytes_per_sec": 0, 00:04:07.253 "r_mbytes_per_sec": 0, 00:04:07.253 "w_mbytes_per_sec": 0 00:04:07.253 }, 00:04:07.253 "claimed": false, 00:04:07.253 "zoned": false, 00:04:07.253 "supported_io_types": { 00:04:07.253 "read": true, 00:04:07.253 "write": true, 00:04:07.253 "unmap": true, 00:04:07.253 "flush": true, 00:04:07.253 "reset": true, 00:04:07.253 "nvme_admin": false, 00:04:07.253 "nvme_io": false, 00:04:07.253 "nvme_io_md": false, 00:04:07.253 "write_zeroes": true, 00:04:07.253 "zcopy": true, 00:04:07.253 "get_zone_info": false, 00:04:07.253 "zone_management": false, 00:04:07.253 "zone_append": false, 00:04:07.253 "compare": false, 00:04:07.253 "compare_and_write": false, 00:04:07.253 "abort": true, 00:04:07.253 "seek_hole": false, 00:04:07.253 "seek_data": false, 00:04:07.253 "copy": true, 00:04:07.253 "nvme_iov_md": false 00:04:07.253 }, 00:04:07.253 "memory_domains": [ 00:04:07.253 { 
00:04:07.253 "dma_device_id": "system", 00:04:07.253 "dma_device_type": 1 00:04:07.253 }, 00:04:07.253 { 00:04:07.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:07.253 "dma_device_type": 2 00:04:07.253 } 00:04:07.253 ], 00:04:07.253 "driver_specific": {} 00:04:07.253 } 00:04:07.253 ]' 00:04:07.253 19:38:02 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:07.253 19:38:02 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:07.253 19:38:02 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:07.253 19:38:02 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.253 19:38:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:07.253 19:38:02 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:07.253 19:38:02 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:07.253 19:38:02 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.253 19:38:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:07.253 19:38:02 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:07.253 19:38:02 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:07.253 19:38:02 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:07.253 19:38:02 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:07.253 00:04:07.253 real 0m0.114s 00:04:07.253 user 0m0.064s 00:04:07.253 sys 0m0.013s 00:04:07.253 19:38:02 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:07.253 ************************************ 00:04:07.253 END TEST rpc_plugins 00:04:07.253 ************************************ 00:04:07.253 19:38:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:07.253 19:38:02 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:07.253 19:38:02 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:07.253 19:38:02 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:07.253 19:38:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.253 ************************************ 00:04:07.253 START TEST rpc_trace_cmd_test 00:04:07.253 ************************************ 00:04:07.253 19:38:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:07.253 19:38:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:07.253 19:38:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:07.253 19:38:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.253 19:38:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:07.253 19:38:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:07.253 19:38:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:07.253 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56084", 00:04:07.253 "tpoint_group_mask": "0x8", 00:04:07.253 "iscsi_conn": { 00:04:07.253 "mask": "0x2", 00:04:07.253 "tpoint_mask": "0x0" 00:04:07.253 }, 00:04:07.253 "scsi": { 00:04:07.253 "mask": "0x4", 00:04:07.253 "tpoint_mask": "0x0" 00:04:07.253 }, 00:04:07.253 "bdev": { 00:04:07.253 "mask": "0x8", 00:04:07.253 "tpoint_mask": "0xffffffffffffffff" 00:04:07.253 }, 00:04:07.253 "nvmf_rdma": { 00:04:07.253 "mask": "0x10", 00:04:07.253 "tpoint_mask": "0x0" 00:04:07.253 }, 00:04:07.253 "nvmf_tcp": { 00:04:07.253 "mask": "0x20", 00:04:07.253 "tpoint_mask": "0x0" 00:04:07.253 }, 00:04:07.253 "ftl": { 00:04:07.253 
"mask": "0x40", 00:04:07.253 "tpoint_mask": "0x0" 00:04:07.253 }, 00:04:07.253 "blobfs": { 00:04:07.253 "mask": "0x80", 00:04:07.253 "tpoint_mask": "0x0" 00:04:07.253 }, 00:04:07.253 "dsa": { 00:04:07.253 "mask": "0x200", 00:04:07.253 "tpoint_mask": "0x0" 00:04:07.253 }, 00:04:07.253 "thread": { 00:04:07.253 "mask": "0x400", 00:04:07.253 "tpoint_mask": "0x0" 00:04:07.253 }, 00:04:07.253 "nvme_pcie": { 00:04:07.253 "mask": "0x800", 00:04:07.253 "tpoint_mask": "0x0" 00:04:07.253 }, 00:04:07.253 "iaa": { 00:04:07.253 "mask": "0x1000", 00:04:07.253 "tpoint_mask": "0x0" 00:04:07.253 }, 00:04:07.253 "nvme_tcp": { 00:04:07.253 "mask": "0x2000", 00:04:07.253 "tpoint_mask": "0x0" 00:04:07.253 }, 00:04:07.253 "bdev_nvme": { 00:04:07.253 "mask": "0x4000", 00:04:07.253 "tpoint_mask": "0x0" 00:04:07.253 }, 00:04:07.253 "sock": { 00:04:07.253 "mask": "0x8000", 00:04:07.253 "tpoint_mask": "0x0" 00:04:07.253 }, 00:04:07.253 "blob": { 00:04:07.253 "mask": "0x10000", 00:04:07.253 "tpoint_mask": "0x0" 00:04:07.253 }, 00:04:07.253 "bdev_raid": { 00:04:07.253 "mask": "0x20000", 00:04:07.253 "tpoint_mask": "0x0" 00:04:07.253 }, 00:04:07.253 "scheduler": { 00:04:07.253 "mask": "0x40000", 00:04:07.253 "tpoint_mask": "0x0" 00:04:07.253 } 00:04:07.253 }' 00:04:07.253 19:38:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:07.253 19:38:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:07.253 19:38:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:07.512 19:38:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:07.513 19:38:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:07.513 19:38:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:07.513 19:38:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:07.513 19:38:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:07.513 19:38:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:07.513 ************************************ 00:04:07.513 END TEST rpc_trace_cmd_test 00:04:07.513 ************************************ 00:04:07.513 19:38:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:07.513 00:04:07.513 real 0m0.171s 00:04:07.513 user 0m0.142s 00:04:07.513 sys 0m0.019s 00:04:07.513 19:38:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:07.513 19:38:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:07.513 19:38:02 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:07.513 19:38:02 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:07.513 19:38:02 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:07.513 19:38:02 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:07.513 19:38:02 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:07.513 19:38:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.513 ************************************ 00:04:07.513 START TEST rpc_daemon_integrity 00:04:07.513 ************************************ 00:04:07.513 19:38:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:07.513 19:38:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:07.513 19:38:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.513 19:38:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.513 
19:38:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:07.513 19:38:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:07.513 19:38:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:07.513 19:38:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:07.513 19:38:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:07.513 19:38:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.513 19:38:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.513 19:38:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:07.513 19:38:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:07.513 19:38:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:07.513 19:38:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.513 19:38:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.513 19:38:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:07.513 19:38:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:07.513 { 00:04:07.513 "name": "Malloc2", 00:04:07.513 "aliases": [ 00:04:07.513 "3938d0a3-95e4-40a3-a1ae-8d7d6e30be94" 00:04:07.513 ], 00:04:07.513 "product_name": "Malloc disk", 00:04:07.513 "block_size": 512, 00:04:07.513 "num_blocks": 16384, 00:04:07.513 "uuid": "3938d0a3-95e4-40a3-a1ae-8d7d6e30be94", 00:04:07.513 "assigned_rate_limits": { 00:04:07.513 "rw_ios_per_sec": 0, 00:04:07.513 "rw_mbytes_per_sec": 0, 00:04:07.513 "r_mbytes_per_sec": 0, 00:04:07.513 "w_mbytes_per_sec": 0 00:04:07.513 }, 00:04:07.513 "claimed": false, 00:04:07.513 "zoned": false, 00:04:07.513 "supported_io_types": { 00:04:07.513 "read": true, 00:04:07.513 "write": true, 00:04:07.513 "unmap": true, 00:04:07.513 "flush": true, 00:04:07.513 "reset": true, 00:04:07.513 "nvme_admin": false, 00:04:07.513 "nvme_io": false, 00:04:07.513 "nvme_io_md": false, 00:04:07.513 "write_zeroes": true, 00:04:07.513 "zcopy": true, 00:04:07.513 "get_zone_info": false, 00:04:07.513 "zone_management": false, 00:04:07.513 "zone_append": false, 00:04:07.513 "compare": false, 00:04:07.513 "compare_and_write": false, 00:04:07.513 "abort": true, 00:04:07.513 "seek_hole": false, 00:04:07.513 "seek_data": false, 00:04:07.513 "copy": true, 00:04:07.513 "nvme_iov_md": false 00:04:07.513 }, 00:04:07.513 "memory_domains": [ 00:04:07.513 { 00:04:07.513 "dma_device_id": "system", 00:04:07.513 "dma_device_type": 1 00:04:07.513 }, 00:04:07.513 { 00:04:07.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:07.513 "dma_device_type": 2 00:04:07.513 } 00:04:07.513 ], 00:04:07.513 "driver_specific": {} 00:04:07.513 } 00:04:07.513 ]' 00:04:07.513 19:38:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:07.773 19:38:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:07.773 19:38:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:07.773 19:38:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.773 19:38:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.773 [2024-11-26 19:38:02.785178] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:07.773 [2024-11-26 19:38:02.785229] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:04:07.773 [2024-11-26 19:38:02.785242] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x106d030 00:04:07.773 [2024-11-26 19:38:02.785248] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:07.773 [2024-11-26 19:38:02.786624] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:07.773 [2024-11-26 19:38:02.786653] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:07.773 Passthru0 00:04:07.773 19:38:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:07.773 19:38:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:07.773 19:38:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.773 19:38:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.773 19:38:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:07.773 19:38:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:07.773 { 00:04:07.773 "name": "Malloc2", 00:04:07.773 "aliases": [ 00:04:07.773 "3938d0a3-95e4-40a3-a1ae-8d7d6e30be94" 00:04:07.773 ], 00:04:07.773 "product_name": "Malloc disk", 00:04:07.773 "block_size": 512, 00:04:07.773 "num_blocks": 16384, 00:04:07.773 "uuid": "3938d0a3-95e4-40a3-a1ae-8d7d6e30be94", 00:04:07.773 "assigned_rate_limits": { 00:04:07.773 "rw_ios_per_sec": 0, 00:04:07.773 "rw_mbytes_per_sec": 0, 00:04:07.773 "r_mbytes_per_sec": 0, 00:04:07.773 "w_mbytes_per_sec": 0 00:04:07.773 }, 00:04:07.773 "claimed": true, 00:04:07.773 "claim_type": "exclusive_write", 00:04:07.773 "zoned": false, 00:04:07.773 "supported_io_types": { 00:04:07.773 "read": true, 00:04:07.773 "write": true, 00:04:07.773 "unmap": true, 00:04:07.773 "flush": true, 00:04:07.773 "reset": true, 00:04:07.773 "nvme_admin": false, 00:04:07.773 "nvme_io": false, 00:04:07.773 "nvme_io_md": false, 00:04:07.773 "write_zeroes": true, 00:04:07.773 "zcopy": true, 00:04:07.773 "get_zone_info": false, 00:04:07.773 "zone_management": false, 00:04:07.773 "zone_append": false, 00:04:07.773 "compare": false, 00:04:07.773 "compare_and_write": false, 00:04:07.773 "abort": true, 00:04:07.773 "seek_hole": false, 00:04:07.773 "seek_data": false, 00:04:07.773 "copy": true, 00:04:07.773 "nvme_iov_md": false 00:04:07.773 }, 00:04:07.773 "memory_domains": [ 00:04:07.773 { 00:04:07.773 "dma_device_id": "system", 00:04:07.773 "dma_device_type": 1 00:04:07.773 }, 00:04:07.773 { 00:04:07.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:07.773 "dma_device_type": 2 00:04:07.773 } 00:04:07.773 ], 00:04:07.773 "driver_specific": {} 00:04:07.773 }, 00:04:07.773 { 00:04:07.773 "name": "Passthru0", 00:04:07.773 "aliases": [ 00:04:07.773 "3971727a-4c44-5878-840e-e4a7d18a8727" 00:04:07.773 ], 00:04:07.773 "product_name": "passthru", 00:04:07.773 "block_size": 512, 00:04:07.773 "num_blocks": 16384, 00:04:07.773 "uuid": "3971727a-4c44-5878-840e-e4a7d18a8727", 00:04:07.773 "assigned_rate_limits": { 00:04:07.773 "rw_ios_per_sec": 0, 00:04:07.773 "rw_mbytes_per_sec": 0, 00:04:07.773 "r_mbytes_per_sec": 0, 00:04:07.773 "w_mbytes_per_sec": 0 00:04:07.773 }, 00:04:07.773 "claimed": false, 00:04:07.773 "zoned": false, 00:04:07.773 "supported_io_types": { 00:04:07.773 "read": true, 00:04:07.773 "write": true, 00:04:07.773 "unmap": true, 00:04:07.773 "flush": true, 00:04:07.773 "reset": true, 00:04:07.773 "nvme_admin": false, 00:04:07.773 "nvme_io": false, 00:04:07.773 
"nvme_io_md": false, 00:04:07.773 "write_zeroes": true, 00:04:07.773 "zcopy": true, 00:04:07.773 "get_zone_info": false, 00:04:07.773 "zone_management": false, 00:04:07.773 "zone_append": false, 00:04:07.773 "compare": false, 00:04:07.773 "compare_and_write": false, 00:04:07.773 "abort": true, 00:04:07.773 "seek_hole": false, 00:04:07.773 "seek_data": false, 00:04:07.773 "copy": true, 00:04:07.773 "nvme_iov_md": false 00:04:07.773 }, 00:04:07.773 "memory_domains": [ 00:04:07.773 { 00:04:07.774 "dma_device_id": "system", 00:04:07.774 "dma_device_type": 1 00:04:07.774 }, 00:04:07.774 { 00:04:07.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:07.774 "dma_device_type": 2 00:04:07.774 } 00:04:07.774 ], 00:04:07.774 "driver_specific": { 00:04:07.774 "passthru": { 00:04:07.774 "name": "Passthru0", 00:04:07.774 "base_bdev_name": "Malloc2" 00:04:07.774 } 00:04:07.774 } 00:04:07.774 } 00:04:07.774 ]' 00:04:07.774 19:38:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:07.774 19:38:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:07.774 19:38:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:07.774 19:38:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.774 19:38:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.774 19:38:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:07.774 19:38:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:07.774 19:38:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.774 19:38:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.774 19:38:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:07.774 19:38:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:07.774 19:38:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.774 19:38:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.774 19:38:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:07.774 19:38:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:07.774 19:38:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:07.774 ************************************ 00:04:07.774 END TEST rpc_daemon_integrity 00:04:07.774 ************************************ 00:04:07.774 19:38:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:07.774 00:04:07.774 real 0m0.241s 00:04:07.774 user 0m0.134s 00:04:07.774 sys 0m0.030s 00:04:07.774 19:38:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:07.774 19:38:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.774 19:38:02 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:07.774 19:38:02 rpc -- rpc/rpc.sh@84 -- # killprocess 56084 00:04:07.774 19:38:02 rpc -- common/autotest_common.sh@954 -- # '[' -z 56084 ']' 00:04:07.774 19:38:02 rpc -- common/autotest_common.sh@958 -- # kill -0 56084 00:04:07.774 19:38:02 rpc -- common/autotest_common.sh@959 -- # uname 00:04:07.774 19:38:02 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:07.774 19:38:02 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56084 00:04:07.774 killing process with pid 56084 00:04:07.774 19:38:02 rpc -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:07.774 19:38:02 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:07.774 19:38:02 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56084' 00:04:07.774 19:38:02 rpc -- common/autotest_common.sh@973 -- # kill 56084 00:04:07.774 19:38:02 rpc -- common/autotest_common.sh@978 -- # wait 56084 00:04:08.034 00:04:08.034 real 0m2.299s 00:04:08.034 user 0m2.844s 00:04:08.034 sys 0m0.485s 00:04:08.034 ************************************ 00:04:08.034 END TEST rpc 00:04:08.034 ************************************ 00:04:08.034 19:38:03 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:08.034 19:38:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.034 19:38:03 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:08.034 19:38:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:08.034 19:38:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:08.034 19:38:03 -- common/autotest_common.sh@10 -- # set +x 00:04:08.034 ************************************ 00:04:08.034 START TEST skip_rpc 00:04:08.034 ************************************ 00:04:08.034 19:38:03 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:08.293 * Looking for test storage... 00:04:08.293 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:08.293 19:38:03 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:08.293 19:38:03 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:08.293 19:38:03 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:08.293 19:38:03 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:08.293 19:38:03 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:08.293 19:38:03 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:08.293 19:38:03 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:08.293 19:38:03 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:08.293 19:38:03 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:08.293 19:38:03 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:08.293 19:38:03 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:08.293 19:38:03 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:08.293 19:38:03 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:08.293 19:38:03 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:08.293 19:38:03 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:08.293 19:38:03 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:08.293 19:38:03 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:08.293 19:38:03 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:08.293 19:38:03 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:08.293 19:38:03 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:08.293 19:38:03 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:08.293 19:38:03 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:08.293 19:38:03 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:08.293 19:38:03 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:08.293 19:38:03 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:08.293 19:38:03 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:08.293 19:38:03 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:08.293 19:38:03 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:08.293 19:38:03 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:08.293 19:38:03 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:08.293 19:38:03 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:08.293 19:38:03 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:08.293 19:38:03 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:08.293 19:38:03 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:08.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.293 --rc genhtml_branch_coverage=1 00:04:08.293 --rc genhtml_function_coverage=1 00:04:08.293 --rc genhtml_legend=1 00:04:08.293 --rc geninfo_all_blocks=1 00:04:08.293 --rc geninfo_unexecuted_blocks=1 00:04:08.293 00:04:08.293 ' 00:04:08.293 19:38:03 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:08.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.293 --rc genhtml_branch_coverage=1 00:04:08.293 --rc genhtml_function_coverage=1 00:04:08.293 --rc genhtml_legend=1 00:04:08.293 --rc geninfo_all_blocks=1 00:04:08.293 --rc geninfo_unexecuted_blocks=1 00:04:08.293 00:04:08.293 ' 00:04:08.293 19:38:03 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:08.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.293 --rc genhtml_branch_coverage=1 00:04:08.293 --rc genhtml_function_coverage=1 00:04:08.293 --rc genhtml_legend=1 00:04:08.293 --rc geninfo_all_blocks=1 00:04:08.293 --rc geninfo_unexecuted_blocks=1 00:04:08.293 00:04:08.293 ' 00:04:08.293 19:38:03 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:08.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.293 --rc genhtml_branch_coverage=1 00:04:08.293 --rc genhtml_function_coverage=1 00:04:08.293 --rc genhtml_legend=1 00:04:08.293 --rc geninfo_all_blocks=1 00:04:08.293 --rc geninfo_unexecuted_blocks=1 00:04:08.293 00:04:08.293 ' 00:04:08.293 19:38:03 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:08.293 19:38:03 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:08.293 19:38:03 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:08.293 19:38:03 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:08.293 19:38:03 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:08.293 19:38:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.293 ************************************ 00:04:08.293 START TEST skip_rpc 00:04:08.293 ************************************ 00:04:08.293 19:38:03 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:08.293 19:38:03 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=56285 00:04:08.293 19:38:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:08.293 19:38:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:08.293 19:38:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:08.293 [2024-11-26 19:38:03.455119] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:04:08.293 [2024-11-26 19:38:03.455200] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56285 ] 00:04:08.551 [2024-11-26 19:38:03.591271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:08.551 [2024-11-26 19:38:03.634290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:08.551 [2024-11-26 19:38:03.686025] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:13.855 19:38:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:13.855 19:38:08 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:13.855 19:38:08 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:13.855 19:38:08 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:13.855 19:38:08 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:13.855 19:38:08 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:13.855 19:38:08 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:13.855 19:38:08 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:13.855 19:38:08 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:13.855 19:38:08 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.855 19:38:08 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:13.855 19:38:08 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:13.855 19:38:08 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:13.855 19:38:08 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:13.855 19:38:08 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:13.855 19:38:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:13.855 19:38:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 56285 00:04:13.855 19:38:08 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 56285 ']' 00:04:13.855 19:38:08 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 56285 00:04:13.855 19:38:08 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:13.855 19:38:08 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:13.855 19:38:08 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56285 00:04:13.855 19:38:08 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:13.855 19:38:08 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:13.855 killing process with pid 56285 00:04:13.855 19:38:08 skip_rpc.skip_rpc -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 56285' 00:04:13.855 19:38:08 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 56285 00:04:13.855 19:38:08 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 56285 00:04:13.855 00:04:13.855 real 0m5.227s 00:04:13.855 user 0m4.949s 00:04:13.855 sys 0m0.180s 00:04:13.855 19:38:08 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:13.855 19:38:08 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.855 ************************************ 00:04:13.855 END TEST skip_rpc 00:04:13.855 ************************************ 00:04:13.855 19:38:08 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:13.855 19:38:08 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:13.855 19:38:08 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:13.855 19:38:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.855 ************************************ 00:04:13.855 START TEST skip_rpc_with_json 00:04:13.855 ************************************ 00:04:13.855 19:38:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:13.855 19:38:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:13.855 19:38:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=56366 00:04:13.855 19:38:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:13.855 19:38:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:13.855 19:38:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 56366 00:04:13.855 19:38:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 56366 ']' 00:04:13.855 19:38:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:13.855 19:38:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:13.855 19:38:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:13.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:13.855 19:38:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:13.855 19:38:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:13.855 [2024-11-26 19:38:08.729138] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:04:13.855 [2024-11-26 19:38:08.729243] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56366 ] 00:04:13.855 [2024-11-26 19:38:08.880110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:13.855 [2024-11-26 19:38:08.917203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:13.855 [2024-11-26 19:38:08.965128] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:14.482 19:38:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:14.482 19:38:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:14.482 19:38:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:14.482 19:38:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.482 19:38:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:14.482 [2024-11-26 19:38:09.605047] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:14.482 request: 00:04:14.482 { 00:04:14.482 "trtype": "tcp", 00:04:14.482 "method": "nvmf_get_transports", 00:04:14.482 "req_id": 1 00:04:14.482 } 00:04:14.482 Got JSON-RPC error response 00:04:14.482 response: 00:04:14.482 { 00:04:14.482 "code": -19, 00:04:14.482 "message": "No such device" 00:04:14.482 } 00:04:14.482 19:38:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:14.482 19:38:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:14.482 19:38:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.482 19:38:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:14.482 [2024-11-26 19:38:09.613146] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:14.482 19:38:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.482 19:38:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:14.482 19:38:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.482 19:38:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:14.741 19:38:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.741 19:38:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:14.741 { 00:04:14.741 "subsystems": [ 00:04:14.741 { 00:04:14.741 "subsystem": "fsdev", 00:04:14.741 "config": [ 00:04:14.741 { 00:04:14.741 "method": "fsdev_set_opts", 00:04:14.741 "params": { 00:04:14.741 "fsdev_io_pool_size": 65535, 00:04:14.741 "fsdev_io_cache_size": 256 00:04:14.741 } 00:04:14.741 } 00:04:14.741 ] 00:04:14.741 }, 00:04:14.741 { 00:04:14.741 "subsystem": "keyring", 00:04:14.741 "config": [] 00:04:14.741 }, 00:04:14.741 { 00:04:14.741 "subsystem": "iobuf", 00:04:14.741 "config": [ 00:04:14.741 { 00:04:14.741 "method": "iobuf_set_options", 00:04:14.741 "params": { 00:04:14.741 "small_pool_count": 8192, 00:04:14.741 "large_pool_count": 1024, 00:04:14.741 "small_bufsize": 8192, 00:04:14.741 "large_bufsize": 135168, 00:04:14.741 "enable_numa": false 00:04:14.741 } 
00:04:14.741 } 00:04:14.741 ] 00:04:14.741 }, 00:04:14.741 { 00:04:14.741 "subsystem": "sock", 00:04:14.741 "config": [ 00:04:14.741 { 00:04:14.741 "method": "sock_set_default_impl", 00:04:14.741 "params": { 00:04:14.741 "impl_name": "uring" 00:04:14.741 } 00:04:14.741 }, 00:04:14.741 { 00:04:14.741 "method": "sock_impl_set_options", 00:04:14.741 "params": { 00:04:14.741 "impl_name": "ssl", 00:04:14.741 "recv_buf_size": 4096, 00:04:14.741 "send_buf_size": 4096, 00:04:14.741 "enable_recv_pipe": true, 00:04:14.741 "enable_quickack": false, 00:04:14.741 "enable_placement_id": 0, 00:04:14.741 "enable_zerocopy_send_server": true, 00:04:14.741 "enable_zerocopy_send_client": false, 00:04:14.741 "zerocopy_threshold": 0, 00:04:14.741 "tls_version": 0, 00:04:14.741 "enable_ktls": false 00:04:14.741 } 00:04:14.741 }, 00:04:14.741 { 00:04:14.741 "method": "sock_impl_set_options", 00:04:14.741 "params": { 00:04:14.741 "impl_name": "posix", 00:04:14.741 "recv_buf_size": 2097152, 00:04:14.741 "send_buf_size": 2097152, 00:04:14.741 "enable_recv_pipe": true, 00:04:14.741 "enable_quickack": false, 00:04:14.741 "enable_placement_id": 0, 00:04:14.741 "enable_zerocopy_send_server": true, 00:04:14.741 "enable_zerocopy_send_client": false, 00:04:14.741 "zerocopy_threshold": 0, 00:04:14.741 "tls_version": 0, 00:04:14.741 "enable_ktls": false 00:04:14.741 } 00:04:14.741 }, 00:04:14.741 { 00:04:14.741 "method": "sock_impl_set_options", 00:04:14.741 "params": { 00:04:14.741 "impl_name": "uring", 00:04:14.741 "recv_buf_size": 2097152, 00:04:14.741 "send_buf_size": 2097152, 00:04:14.741 "enable_recv_pipe": true, 00:04:14.741 "enable_quickack": false, 00:04:14.741 "enable_placement_id": 0, 00:04:14.741 "enable_zerocopy_send_server": false, 00:04:14.741 "enable_zerocopy_send_client": false, 00:04:14.741 "zerocopy_threshold": 0, 00:04:14.741 "tls_version": 0, 00:04:14.741 "enable_ktls": false 00:04:14.741 } 00:04:14.741 } 00:04:14.741 ] 00:04:14.741 }, 00:04:14.741 { 00:04:14.741 "subsystem": "vmd", 00:04:14.741 "config": [] 00:04:14.741 }, 00:04:14.741 { 00:04:14.741 "subsystem": "accel", 00:04:14.741 "config": [ 00:04:14.741 { 00:04:14.741 "method": "accel_set_options", 00:04:14.741 "params": { 00:04:14.741 "small_cache_size": 128, 00:04:14.741 "large_cache_size": 16, 00:04:14.741 "task_count": 2048, 00:04:14.741 "sequence_count": 2048, 00:04:14.741 "buf_count": 2048 00:04:14.741 } 00:04:14.741 } 00:04:14.741 ] 00:04:14.741 }, 00:04:14.741 { 00:04:14.741 "subsystem": "bdev", 00:04:14.741 "config": [ 00:04:14.741 { 00:04:14.741 "method": "bdev_set_options", 00:04:14.741 "params": { 00:04:14.741 "bdev_io_pool_size": 65535, 00:04:14.741 "bdev_io_cache_size": 256, 00:04:14.741 "bdev_auto_examine": true, 00:04:14.741 "iobuf_small_cache_size": 128, 00:04:14.741 "iobuf_large_cache_size": 16 00:04:14.741 } 00:04:14.741 }, 00:04:14.741 { 00:04:14.741 "method": "bdev_raid_set_options", 00:04:14.741 "params": { 00:04:14.741 "process_window_size_kb": 1024, 00:04:14.741 "process_max_bandwidth_mb_sec": 0 00:04:14.741 } 00:04:14.741 }, 00:04:14.741 { 00:04:14.741 "method": "bdev_iscsi_set_options", 00:04:14.741 "params": { 00:04:14.741 "timeout_sec": 30 00:04:14.741 } 00:04:14.741 }, 00:04:14.741 { 00:04:14.741 "method": "bdev_nvme_set_options", 00:04:14.741 "params": { 00:04:14.741 "action_on_timeout": "none", 00:04:14.741 "timeout_us": 0, 00:04:14.741 "timeout_admin_us": 0, 00:04:14.741 "keep_alive_timeout_ms": 10000, 00:04:14.741 "arbitration_burst": 0, 00:04:14.741 "low_priority_weight": 0, 00:04:14.741 "medium_priority_weight": 
0, 00:04:14.741 "high_priority_weight": 0, 00:04:14.741 "nvme_adminq_poll_period_us": 10000, 00:04:14.741 "nvme_ioq_poll_period_us": 0, 00:04:14.741 "io_queue_requests": 0, 00:04:14.741 "delay_cmd_submit": true, 00:04:14.741 "transport_retry_count": 4, 00:04:14.741 "bdev_retry_count": 3, 00:04:14.741 "transport_ack_timeout": 0, 00:04:14.741 "ctrlr_loss_timeout_sec": 0, 00:04:14.741 "reconnect_delay_sec": 0, 00:04:14.741 "fast_io_fail_timeout_sec": 0, 00:04:14.741 "disable_auto_failback": false, 00:04:14.741 "generate_uuids": false, 00:04:14.741 "transport_tos": 0, 00:04:14.741 "nvme_error_stat": false, 00:04:14.741 "rdma_srq_size": 0, 00:04:14.741 "io_path_stat": false, 00:04:14.741 "allow_accel_sequence": false, 00:04:14.741 "rdma_max_cq_size": 0, 00:04:14.741 "rdma_cm_event_timeout_ms": 0, 00:04:14.741 "dhchap_digests": [ 00:04:14.741 "sha256", 00:04:14.741 "sha384", 00:04:14.741 "sha512" 00:04:14.741 ], 00:04:14.741 "dhchap_dhgroups": [ 00:04:14.741 "null", 00:04:14.741 "ffdhe2048", 00:04:14.741 "ffdhe3072", 00:04:14.741 "ffdhe4096", 00:04:14.741 "ffdhe6144", 00:04:14.741 "ffdhe8192" 00:04:14.741 ] 00:04:14.741 } 00:04:14.741 }, 00:04:14.741 { 00:04:14.741 "method": "bdev_nvme_set_hotplug", 00:04:14.741 "params": { 00:04:14.741 "period_us": 100000, 00:04:14.741 "enable": false 00:04:14.741 } 00:04:14.741 }, 00:04:14.741 { 00:04:14.741 "method": "bdev_wait_for_examine" 00:04:14.742 } 00:04:14.742 ] 00:04:14.742 }, 00:04:14.742 { 00:04:14.742 "subsystem": "scsi", 00:04:14.742 "config": null 00:04:14.742 }, 00:04:14.742 { 00:04:14.742 "subsystem": "scheduler", 00:04:14.742 "config": [ 00:04:14.742 { 00:04:14.742 "method": "framework_set_scheduler", 00:04:14.742 "params": { 00:04:14.742 "name": "static" 00:04:14.742 } 00:04:14.742 } 00:04:14.742 ] 00:04:14.742 }, 00:04:14.742 { 00:04:14.742 "subsystem": "vhost_scsi", 00:04:14.742 "config": [] 00:04:14.742 }, 00:04:14.742 { 00:04:14.742 "subsystem": "vhost_blk", 00:04:14.742 "config": [] 00:04:14.742 }, 00:04:14.742 { 00:04:14.742 "subsystem": "ublk", 00:04:14.742 "config": [] 00:04:14.742 }, 00:04:14.742 { 00:04:14.742 "subsystem": "nbd", 00:04:14.742 "config": [] 00:04:14.742 }, 00:04:14.742 { 00:04:14.742 "subsystem": "nvmf", 00:04:14.742 "config": [ 00:04:14.742 { 00:04:14.742 "method": "nvmf_set_config", 00:04:14.742 "params": { 00:04:14.742 "discovery_filter": "match_any", 00:04:14.742 "admin_cmd_passthru": { 00:04:14.742 "identify_ctrlr": false 00:04:14.742 }, 00:04:14.742 "dhchap_digests": [ 00:04:14.742 "sha256", 00:04:14.742 "sha384", 00:04:14.742 "sha512" 00:04:14.742 ], 00:04:14.742 "dhchap_dhgroups": [ 00:04:14.742 "null", 00:04:14.742 "ffdhe2048", 00:04:14.742 "ffdhe3072", 00:04:14.742 "ffdhe4096", 00:04:14.742 "ffdhe6144", 00:04:14.742 "ffdhe8192" 00:04:14.742 ] 00:04:14.742 } 00:04:14.742 }, 00:04:14.742 { 00:04:14.742 "method": "nvmf_set_max_subsystems", 00:04:14.742 "params": { 00:04:14.742 "max_subsystems": 1024 00:04:14.742 } 00:04:14.742 }, 00:04:14.742 { 00:04:14.742 "method": "nvmf_set_crdt", 00:04:14.742 "params": { 00:04:14.742 "crdt1": 0, 00:04:14.742 "crdt2": 0, 00:04:14.742 "crdt3": 0 00:04:14.742 } 00:04:14.742 }, 00:04:14.742 { 00:04:14.742 "method": "nvmf_create_transport", 00:04:14.742 "params": { 00:04:14.742 "trtype": "TCP", 00:04:14.742 "max_queue_depth": 128, 00:04:14.742 "max_io_qpairs_per_ctrlr": 127, 00:04:14.742 "in_capsule_data_size": 4096, 00:04:14.742 "max_io_size": 131072, 00:04:14.742 "io_unit_size": 131072, 00:04:14.742 "max_aq_depth": 128, 00:04:14.742 "num_shared_buffers": 511, 00:04:14.742 
"buf_cache_size": 4294967295, 00:04:14.742 "dif_insert_or_strip": false, 00:04:14.742 "zcopy": false, 00:04:14.742 "c2h_success": true, 00:04:14.742 "sock_priority": 0, 00:04:14.742 "abort_timeout_sec": 1, 00:04:14.742 "ack_timeout": 0, 00:04:14.742 "data_wr_pool_size": 0 00:04:14.742 } 00:04:14.742 } 00:04:14.742 ] 00:04:14.742 }, 00:04:14.742 { 00:04:14.742 "subsystem": "iscsi", 00:04:14.742 "config": [ 00:04:14.742 { 00:04:14.742 "method": "iscsi_set_options", 00:04:14.742 "params": { 00:04:14.742 "node_base": "iqn.2016-06.io.spdk", 00:04:14.742 "max_sessions": 128, 00:04:14.742 "max_connections_per_session": 2, 00:04:14.742 "max_queue_depth": 64, 00:04:14.742 "default_time2wait": 2, 00:04:14.742 "default_time2retain": 20, 00:04:14.742 "first_burst_length": 8192, 00:04:14.742 "immediate_data": true, 00:04:14.742 "allow_duplicated_isid": false, 00:04:14.742 "error_recovery_level": 0, 00:04:14.742 "nop_timeout": 60, 00:04:14.742 "nop_in_interval": 30, 00:04:14.742 "disable_chap": false, 00:04:14.742 "require_chap": false, 00:04:14.742 "mutual_chap": false, 00:04:14.742 "chap_group": 0, 00:04:14.742 "max_large_datain_per_connection": 64, 00:04:14.742 "max_r2t_per_connection": 4, 00:04:14.742 "pdu_pool_size": 36864, 00:04:14.742 "immediate_data_pool_size": 16384, 00:04:14.742 "data_out_pool_size": 2048 00:04:14.742 } 00:04:14.742 } 00:04:14.742 ] 00:04:14.742 } 00:04:14.742 ] 00:04:14.742 } 00:04:14.742 19:38:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:14.742 19:38:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 56366 00:04:14.742 19:38:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 56366 ']' 00:04:14.742 19:38:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 56366 00:04:14.742 19:38:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:14.742 19:38:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:14.742 19:38:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56366 00:04:14.742 19:38:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:14.742 19:38:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:14.742 killing process with pid 56366 00:04:14.742 19:38:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56366' 00:04:14.742 19:38:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 56366 00:04:14.742 19:38:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 56366 00:04:15.000 19:38:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=56393 00:04:15.000 19:38:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:15.000 19:38:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:20.282 19:38:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 56393 00:04:20.282 19:38:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 56393 ']' 00:04:20.282 19:38:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 56393 00:04:20.282 19:38:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:20.282 19:38:15 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:20.282 19:38:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56393 00:04:20.282 19:38:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:20.282 killing process with pid 56393 00:04:20.282 19:38:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:20.282 19:38:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56393' 00:04:20.282 19:38:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 56393 00:04:20.282 19:38:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 56393 00:04:20.282 19:38:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:20.282 19:38:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:20.282 00:04:20.282 real 0m6.557s 00:04:20.282 user 0m6.398s 00:04:20.282 sys 0m0.407s 00:04:20.282 ************************************ 00:04:20.282 END TEST skip_rpc_with_json 00:04:20.282 ************************************ 00:04:20.282 19:38:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:20.282 19:38:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:20.282 19:38:15 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:20.282 19:38:15 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:20.282 19:38:15 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.282 19:38:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.282 ************************************ 00:04:20.282 START TEST skip_rpc_with_delay 00:04:20.282 ************************************ 00:04:20.282 19:38:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:20.282 19:38:15 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:20.282 19:38:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:20.282 19:38:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:20.282 19:38:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:20.282 19:38:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:20.282 19:38:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:20.282 19:38:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:20.282 19:38:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:20.282 19:38:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:20.282 19:38:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:20.282 19:38:15 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:20.282 19:38:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:20.282 [2024-11-26 19:38:15.317684] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:20.282 19:38:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:20.282 19:38:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:20.282 19:38:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:20.282 19:38:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:20.282 00:04:20.282 real 0m0.058s 00:04:20.282 user 0m0.035s 00:04:20.282 sys 0m0.023s 00:04:20.282 19:38:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:20.282 19:38:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:20.282 ************************************ 00:04:20.282 END TEST skip_rpc_with_delay 00:04:20.282 ************************************ 00:04:20.282 19:38:15 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:20.282 19:38:15 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:20.282 19:38:15 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:20.282 19:38:15 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:20.282 19:38:15 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.282 19:38:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.282 ************************************ 00:04:20.282 START TEST exit_on_failed_rpc_init 00:04:20.282 ************************************ 00:04:20.282 19:38:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:20.282 19:38:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=56497 00:04:20.283 19:38:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 56497 00:04:20.283 19:38:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 56497 ']' 00:04:20.283 19:38:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:20.283 19:38:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:20.283 19:38:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:20.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:20.283 19:38:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:20.283 19:38:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:20.283 19:38:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:20.283 [2024-11-26 19:38:15.414875] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:04:20.283 [2024-11-26 19:38:15.414943] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56497 ] 00:04:20.541 [2024-11-26 19:38:15.550819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:20.541 [2024-11-26 19:38:15.585459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.541 [2024-11-26 19:38:15.631215] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:21.106 19:38:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:21.106 19:38:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:21.106 19:38:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:21.106 19:38:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:21.106 19:38:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:21.106 19:38:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:21.106 19:38:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:21.106 19:38:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:21.106 19:38:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:21.106 19:38:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:21.106 19:38:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:21.106 19:38:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:21.106 19:38:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:21.106 19:38:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:21.106 19:38:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:21.106 [2024-11-26 19:38:16.336149] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:04:21.106 [2024-11-26 19:38:16.336212] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56515 ] 00:04:21.376 [2024-11-26 19:38:16.474500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:21.376 [2024-11-26 19:38:16.511792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:21.376 [2024-11-26 19:38:16.511852] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
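The rpc.c *ERROR* just above, together with the "Unable to start RPC service" and spdk_app_stop lines that follow, is the expected outcome of exit_on_failed_rpc_init: with pid 56497 already listening on /var/tmp/spdk.sock, a second spdk_tgt launched against the same socket must fail RPC initialization and exit non-zero. A minimal stand-alone sketch of that check follows; it is not part of the SPDK suite — the helper name, the sleep, and the 10-second timeout are ours, while the binary path and socket are the ones traced in this log.

  #!/usr/bin/env bash
  # Sketch: a second spdk_tgt pointed at an already-claimed RPC socket must exit non-zero.
  SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  SOCK=/var/tmp/spdk.sock

  second_instance_must_fail() {
      "$SPDK_TGT" -m 0x1 -r "$SOCK" &        # first instance claims the RPC socket
      local first=$!
      sleep 2                                # crude settle time; the real suite uses waitforlisten

      timeout 10 "$SPDK_TGT" -m 0x2 -r "$SOCK"
      local rc=$?
      kill "$first"

      if (( rc == 0 || rc == 124 )); then    # 124 means it was still running when timeout fired
          echo "ERROR: second spdk_tgt did not fail fast on the busy RPC socket" >&2
          return 1
      fi
      echo "second spdk_tgt exited non-zero, as the test expects"
  }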
00:04:21.376 [2024-11-26 19:38:16.511860] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:21.376 [2024-11-26 19:38:16.511866] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:21.376 19:38:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:21.376 19:38:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:21.376 19:38:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:21.376 19:38:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:21.376 19:38:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:21.376 19:38:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:21.376 19:38:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:21.376 19:38:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 56497 00:04:21.376 19:38:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 56497 ']' 00:04:21.376 19:38:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 56497 00:04:21.376 19:38:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:21.376 19:38:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:21.376 19:38:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56497 00:04:21.376 19:38:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:21.376 19:38:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:21.376 killing process with pid 56497 00:04:21.376 19:38:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56497' 00:04:21.376 19:38:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 56497 00:04:21.376 19:38:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 56497 00:04:21.634 00:04:21.634 real 0m1.409s 00:04:21.634 user 0m1.641s 00:04:21.634 sys 0m0.242s 00:04:21.634 19:38:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:21.634 19:38:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:21.634 ************************************ 00:04:21.634 END TEST exit_on_failed_rpc_init 00:04:21.634 ************************************ 00:04:21.634 19:38:16 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:21.634 00:04:21.634 real 0m13.549s 00:04:21.634 user 0m13.172s 00:04:21.634 sys 0m1.002s 00:04:21.634 19:38:16 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:21.634 19:38:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.634 ************************************ 00:04:21.634 END TEST skip_rpc 00:04:21.634 ************************************ 00:04:21.634 19:38:16 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:21.634 19:38:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:21.634 19:38:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:21.634 19:38:16 -- common/autotest_common.sh@10 -- # set +x 00:04:21.634 
************************************ 00:04:21.634 START TEST rpc_client 00:04:21.634 ************************************ 00:04:21.634 19:38:16 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:21.892 * Looking for test storage... 00:04:21.892 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:21.892 19:38:16 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:21.892 19:38:16 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:21.892 19:38:16 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:21.892 19:38:16 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:21.892 19:38:16 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:21.892 19:38:16 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:21.892 19:38:16 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:21.892 19:38:16 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:21.892 19:38:16 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:21.892 19:38:16 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:21.892 19:38:16 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:21.892 19:38:16 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:21.892 19:38:16 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:21.892 19:38:16 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:21.892 19:38:16 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:21.892 19:38:16 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:21.892 19:38:16 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:21.892 19:38:16 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:21.892 19:38:16 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:21.892 19:38:16 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:21.892 19:38:16 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:21.892 19:38:16 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:21.892 19:38:16 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:21.892 19:38:16 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:21.892 19:38:16 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:21.892 19:38:16 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:21.892 19:38:16 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:21.892 19:38:16 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:21.892 19:38:16 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:21.892 19:38:16 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:21.892 19:38:16 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:21.892 19:38:16 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:21.892 19:38:16 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:21.892 19:38:16 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:21.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.892 --rc genhtml_branch_coverage=1 00:04:21.892 --rc genhtml_function_coverage=1 00:04:21.892 --rc genhtml_legend=1 00:04:21.892 --rc geninfo_all_blocks=1 00:04:21.892 --rc geninfo_unexecuted_blocks=1 00:04:21.892 00:04:21.892 ' 00:04:21.892 19:38:16 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:21.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.892 --rc genhtml_branch_coverage=1 00:04:21.892 --rc genhtml_function_coverage=1 00:04:21.892 --rc genhtml_legend=1 00:04:21.892 --rc geninfo_all_blocks=1 00:04:21.892 --rc geninfo_unexecuted_blocks=1 00:04:21.892 00:04:21.892 ' 00:04:21.892 19:38:16 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:21.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.892 --rc genhtml_branch_coverage=1 00:04:21.892 --rc genhtml_function_coverage=1 00:04:21.892 --rc genhtml_legend=1 00:04:21.892 --rc geninfo_all_blocks=1 00:04:21.892 --rc geninfo_unexecuted_blocks=1 00:04:21.892 00:04:21.892 ' 00:04:21.892 19:38:16 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:21.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.892 --rc genhtml_branch_coverage=1 00:04:21.892 --rc genhtml_function_coverage=1 00:04:21.892 --rc genhtml_legend=1 00:04:21.892 --rc geninfo_all_blocks=1 00:04:21.892 --rc geninfo_unexecuted_blocks=1 00:04:21.892 00:04:21.892 ' 00:04:21.892 19:38:16 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:21.892 OK 00:04:21.892 19:38:16 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:21.892 00:04:21.892 real 0m0.153s 00:04:21.892 user 0m0.090s 00:04:21.892 sys 0m0.072s 00:04:21.892 19:38:16 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:21.892 19:38:16 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:21.892 ************************************ 00:04:21.892 END TEST rpc_client 00:04:21.892 ************************************ 00:04:21.892 19:38:17 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:21.892 19:38:17 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:21.892 19:38:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:21.892 19:38:17 -- common/autotest_common.sh@10 -- # set +x 00:04:21.892 ************************************ 00:04:21.892 START TEST json_config 00:04:21.892 ************************************ 00:04:21.892 19:38:17 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:21.892 19:38:17 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:21.892 19:38:17 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:21.892 19:38:17 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:22.152 19:38:17 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:22.152 19:38:17 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:22.152 19:38:17 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:22.152 19:38:17 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:22.152 19:38:17 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:22.152 19:38:17 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:22.152 19:38:17 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:22.152 19:38:17 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:22.152 19:38:17 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:22.152 19:38:17 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:22.152 19:38:17 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:22.152 19:38:17 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:22.152 19:38:17 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:22.152 19:38:17 json_config -- scripts/common.sh@345 -- # : 1 00:04:22.152 19:38:17 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:22.152 19:38:17 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:22.152 19:38:17 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:22.152 19:38:17 json_config -- scripts/common.sh@353 -- # local d=1 00:04:22.152 19:38:17 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:22.152 19:38:17 json_config -- scripts/common.sh@355 -- # echo 1 00:04:22.152 19:38:17 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:22.152 19:38:17 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:22.152 19:38:17 json_config -- scripts/common.sh@353 -- # local d=2 00:04:22.152 19:38:17 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:22.152 19:38:17 json_config -- scripts/common.sh@355 -- # echo 2 00:04:22.152 19:38:17 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:22.152 19:38:17 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:22.152 19:38:17 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:22.152 19:38:17 json_config -- scripts/common.sh@368 -- # return 0 00:04:22.152 19:38:17 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:22.152 19:38:17 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:22.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.152 --rc genhtml_branch_coverage=1 00:04:22.152 --rc genhtml_function_coverage=1 00:04:22.152 --rc genhtml_legend=1 00:04:22.152 --rc geninfo_all_blocks=1 00:04:22.152 --rc geninfo_unexecuted_blocks=1 00:04:22.152 00:04:22.152 ' 00:04:22.152 19:38:17 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:22.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.152 --rc genhtml_branch_coverage=1 00:04:22.152 --rc genhtml_function_coverage=1 00:04:22.152 --rc genhtml_legend=1 00:04:22.152 --rc geninfo_all_blocks=1 00:04:22.152 --rc geninfo_unexecuted_blocks=1 00:04:22.152 00:04:22.152 ' 00:04:22.152 19:38:17 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:22.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.152 --rc genhtml_branch_coverage=1 00:04:22.152 --rc genhtml_function_coverage=1 00:04:22.152 --rc genhtml_legend=1 00:04:22.152 --rc geninfo_all_blocks=1 00:04:22.152 --rc geninfo_unexecuted_blocks=1 00:04:22.152 00:04:22.152 ' 00:04:22.152 19:38:17 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:22.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.152 --rc genhtml_branch_coverage=1 00:04:22.152 --rc genhtml_function_coverage=1 00:04:22.152 --rc genhtml_legend=1 00:04:22.152 --rc geninfo_all_blocks=1 00:04:22.152 --rc geninfo_unexecuted_blocks=1 00:04:22.152 00:04:22.152 ' 00:04:22.152 19:38:17 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:22.152 19:38:17 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:22.152 19:38:17 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:22.152 19:38:17 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:22.152 19:38:17 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:22.152 19:38:17 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:22.152 19:38:17 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:22.152 19:38:17 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:22.152 19:38:17 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:22.152 19:38:17 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:22.152 19:38:17 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:22.152 19:38:17 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:22.152 19:38:17 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:04:22.152 19:38:17 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=91838eb1-5852-43eb-90b2-09876f360ab2 00:04:22.152 19:38:17 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:22.152 19:38:17 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:22.152 19:38:17 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:22.152 19:38:17 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:22.152 19:38:17 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:22.152 19:38:17 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:22.152 19:38:17 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:22.152 19:38:17 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:22.152 19:38:17 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:22.152 19:38:17 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:22.152 19:38:17 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:22.152 19:38:17 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:22.153 19:38:17 json_config -- paths/export.sh@5 -- # export PATH 00:04:22.153 19:38:17 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:22.153 19:38:17 json_config -- nvmf/common.sh@51 -- # : 0 00:04:22.153 19:38:17 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:22.153 19:38:17 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:22.153 19:38:17 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:22.153 19:38:17 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:22.153 19:38:17 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:22.153 19:38:17 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:22.153 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:22.153 19:38:17 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:22.153 19:38:17 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:22.153 19:38:17 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:22.153 19:38:17 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:22.153 19:38:17 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:22.153 19:38:17 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:22.153 19:38:17 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:22.153 19:38:17 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:22.153 19:38:17 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:22.153 19:38:17 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:22.153 19:38:17 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:22.153 19:38:17 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:22.153 19:38:17 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:22.153 19:38:17 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:22.153 19:38:17 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:04:22.153 19:38:17 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:22.153 19:38:17 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:22.153 19:38:17 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:22.153 INFO: JSON configuration test init 00:04:22.153 19:38:17 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:22.153 19:38:17 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:22.153 19:38:17 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:22.153 19:38:17 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:22.153 19:38:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.153 19:38:17 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:22.153 19:38:17 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:22.153 19:38:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.153 19:38:17 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:22.153 19:38:17 json_config -- json_config/common.sh@9 -- # local app=target 00:04:22.153 19:38:17 json_config -- json_config/common.sh@10 -- # shift 
00:04:22.153 19:38:17 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:22.153 19:38:17 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:22.153 19:38:17 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:22.153 19:38:17 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:22.153 19:38:17 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:22.153 19:38:17 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=56648 00:04:22.153 Waiting for target to run... 00:04:22.153 19:38:17 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:22.153 19:38:17 json_config -- json_config/common.sh@25 -- # waitforlisten 56648 /var/tmp/spdk_tgt.sock 00:04:22.153 19:38:17 json_config -- common/autotest_common.sh@835 -- # '[' -z 56648 ']' 00:04:22.153 19:38:17 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:22.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:22.153 19:38:17 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:22.153 19:38:17 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:22.153 19:38:17 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:22.153 19:38:17 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:22.153 19:38:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.153 [2024-11-26 19:38:17.235172] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:04:22.153 [2024-11-26 19:38:17.235240] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56648 ] 00:04:22.410 [2024-11-26 19:38:17.541618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:22.410 [2024-11-26 19:38:17.570604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.975 19:38:18 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:22.975 19:38:18 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:22.975 00:04:22.975 19:38:18 json_config -- json_config/common.sh@26 -- # echo '' 00:04:22.975 19:38:18 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:22.975 19:38:18 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:22.975 19:38:18 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:22.975 19:38:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.975 19:38:18 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:22.975 19:38:18 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:22.975 19:38:18 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:22.975 19:38:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.975 19:38:18 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:22.975 19:38:18 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:22.975 19:38:18 json_config 
-- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:23.232 [2024-11-26 19:38:18.358670] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:23.490 19:38:18 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:23.490 19:38:18 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:23.490 19:38:18 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:23.490 19:38:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:23.490 19:38:18 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:23.490 19:38:18 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:23.490 19:38:18 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:23.490 19:38:18 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:23.490 19:38:18 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:23.490 19:38:18 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:23.490 19:38:18 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:23.490 19:38:18 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:23.747 19:38:18 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:23.747 19:38:18 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:23.747 19:38:18 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:23.747 19:38:18 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:23.747 19:38:18 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:23.747 19:38:18 json_config -- json_config/json_config.sh@54 -- # sort 00:04:23.747 19:38:18 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:23.747 19:38:18 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:23.747 19:38:18 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:23.747 19:38:18 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:23.747 19:38:18 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:23.747 19:38:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:23.747 19:38:18 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:23.747 19:38:18 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:23.747 19:38:18 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:23.747 19:38:18 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:23.747 19:38:18 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:23.747 19:38:18 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:23.747 19:38:18 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:23.747 19:38:18 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:23.747 19:38:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:23.747 19:38:18 json_config -- 
json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:23.747 19:38:18 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:23.747 19:38:18 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:23.747 19:38:18 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:23.747 19:38:18 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:24.005 MallocForNvmf0 00:04:24.005 19:38:19 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:24.005 19:38:19 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:24.005 MallocForNvmf1 00:04:24.005 19:38:19 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:24.005 19:38:19 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:24.262 [2024-11-26 19:38:19.419665] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:24.262 19:38:19 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:24.262 19:38:19 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:24.519 19:38:19 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:24.519 19:38:19 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:24.777 19:38:19 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:24.777 19:38:19 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:25.034 19:38:20 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:25.035 19:38:20 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:25.035 [2024-11-26 19:38:20.264034] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:25.292 19:38:20 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:25.292 19:38:20 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:25.292 19:38:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:25.292 19:38:20 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:25.292 19:38:20 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:25.292 19:38:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:25.292 19:38:20 json_config -- json_config/json_config.sh@302 -- # [[ 
0 -eq 1 ]] 00:04:25.292 19:38:20 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:25.293 19:38:20 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:25.293 MallocBdevForConfigChangeCheck 00:04:25.625 19:38:20 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:25.625 19:38:20 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:25.625 19:38:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:25.625 19:38:20 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:25.625 19:38:20 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:25.882 INFO: shutting down applications... 00:04:25.882 19:38:20 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:25.882 19:38:20 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:25.882 19:38:20 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:25.882 19:38:20 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:25.882 19:38:20 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:26.138 Calling clear_iscsi_subsystem 00:04:26.139 Calling clear_nvmf_subsystem 00:04:26.139 Calling clear_nbd_subsystem 00:04:26.139 Calling clear_ublk_subsystem 00:04:26.139 Calling clear_vhost_blk_subsystem 00:04:26.139 Calling clear_vhost_scsi_subsystem 00:04:26.139 Calling clear_bdev_subsystem 00:04:26.139 19:38:21 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:04:26.139 19:38:21 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:26.139 19:38:21 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:26.139 19:38:21 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:26.139 19:38:21 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:04:26.139 19:38:21 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:26.395 19:38:21 json_config -- json_config/json_config.sh@352 -- # break 00:04:26.395 19:38:21 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:26.395 19:38:21 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:26.395 19:38:21 json_config -- json_config/common.sh@31 -- # local app=target 00:04:26.395 19:38:21 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:26.395 19:38:21 json_config -- json_config/common.sh@35 -- # [[ -n 56648 ]] 00:04:26.396 19:38:21 json_config -- json_config/common.sh@38 -- # kill -SIGINT 56648 00:04:26.396 19:38:21 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:26.396 19:38:21 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:26.396 19:38:21 json_config -- json_config/common.sh@41 -- # kill -0 56648 00:04:26.396 19:38:21 json_config -- json_config/common.sh@45 -- # 
sleep 0.5 00:04:26.962 19:38:22 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:26.962 19:38:22 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:26.962 19:38:22 json_config -- json_config/common.sh@41 -- # kill -0 56648 00:04:26.962 19:38:22 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:26.962 19:38:22 json_config -- json_config/common.sh@43 -- # break 00:04:26.962 SPDK target shutdown done 00:04:26.962 19:38:22 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:26.962 19:38:22 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:26.962 INFO: relaunching applications... 00:04:26.962 19:38:22 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:26.962 19:38:22 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:26.962 19:38:22 json_config -- json_config/common.sh@9 -- # local app=target 00:04:26.962 19:38:22 json_config -- json_config/common.sh@10 -- # shift 00:04:26.962 19:38:22 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:26.962 19:38:22 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:26.962 19:38:22 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:26.962 19:38:22 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:26.962 19:38:22 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:26.962 Waiting for target to run... 00:04:26.962 19:38:22 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=56834 00:04:26.962 19:38:22 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:26.962 19:38:22 json_config -- json_config/common.sh@25 -- # waitforlisten 56834 /var/tmp/spdk_tgt.sock 00:04:26.962 19:38:22 json_config -- common/autotest_common.sh@835 -- # '[' -z 56834 ']' 00:04:26.962 19:38:22 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:26.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:26.962 19:38:22 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:26.962 19:38:22 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:26.962 19:38:22 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:26.962 19:38:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.962 19:38:22 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:26.962 [2024-11-26 19:38:22.103485] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:04:26.962 [2024-11-26 19:38:22.103549] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56834 ] 00:04:27.219 [2024-11-26 19:38:22.405013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.219 [2024-11-26 19:38:22.430050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.476 [2024-11-26 19:38:22.564454] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:27.733 [2024-11-26 19:38:22.768328] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:27.733 [2024-11-26 19:38:22.800306] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:27.992 19:38:23 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:27.992 19:38:23 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:27.992 00:04:27.992 19:38:23 json_config -- json_config/common.sh@26 -- # echo '' 00:04:27.992 19:38:23 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:27.992 INFO: Checking if target configuration is the same... 00:04:27.992 19:38:23 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:27.992 19:38:23 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:27.992 19:38:23 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:27.992 19:38:23 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:27.992 + '[' 2 -ne 2 ']' 00:04:27.992 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:27.992 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:27.992 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:27.992 +++ basename /dev/fd/62 00:04:27.992 ++ mktemp /tmp/62.XXX 00:04:27.992 + tmp_file_1=/tmp/62.Evk 00:04:27.992 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:27.992 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:27.992 + tmp_file_2=/tmp/spdk_tgt_config.json.XtS 00:04:27.992 + ret=0 00:04:27.992 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:28.249 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:28.249 + diff -u /tmp/62.Evk /tmp/spdk_tgt_config.json.XtS 00:04:28.249 + echo 'INFO: JSON config files are the same' 00:04:28.249 INFO: JSON config files are the same 00:04:28.249 + rm /tmp/62.Evk /tmp/spdk_tgt_config.json.XtS 00:04:28.249 + exit 0 00:04:28.249 19:38:23 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:28.249 INFO: changing configuration and checking if this can be detected... 00:04:28.249 19:38:23 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
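The comparison that just reported "JSON config files are the same", and the change-detection pass announced on the line above, both follow the recipe traced from json_diff.sh: dump the live configuration over RPC, normalize both documents with config_filter.py -method sort, and diff the results. A condensed stand-alone sketch of that recipe follows; the compare_live_config helper and temp-file handling are ours, the rpc.py and config_filter.py paths are the ones in the trace, and we assume the filter reads stdin, as its argument-free invocation above suggests.

  #!/usr/bin/env bash
  # Sketch of the comparison performed by json_diff.sh in the trace above.
  SPDK=/home/vagrant/spdk_repo/spdk
  SOCK=/var/tmp/spdk_tgt.sock

  compare_live_config() {
      local reference=$1                               # e.g. $SPDK/spdk_tgt_config.json
      local live sorted_ref sorted_live
      live=$(mktemp); sorted_ref=$(mktemp); sorted_live=$(mktemp)

      # Dump the running target's configuration over the RPC socket.
      "$SPDK/scripts/rpc.py" -s "$SOCK" save_config > "$live"

      # Sort both documents so key/array ordering cannot produce a spurious diff.
      "$SPDK/test/json_config/config_filter.py" -method sort < "$reference" > "$sorted_ref"
      "$SPDK/test/json_config/config_filter.py" -method sort < "$live" > "$sorted_live"

      diff -u "$sorted_ref" "$sorted_live"             # exit 0: identical, 1: changed
  }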
00:04:28.249 19:38:23 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:28.249 19:38:23 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:28.509 19:38:23 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:28.509 19:38:23 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:28.509 19:38:23 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:28.509 + '[' 2 -ne 2 ']' 00:04:28.509 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:28.509 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:28.509 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:28.509 +++ basename /dev/fd/62 00:04:28.509 ++ mktemp /tmp/62.XXX 00:04:28.509 + tmp_file_1=/tmp/62.pto 00:04:28.509 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:28.509 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:28.509 + tmp_file_2=/tmp/spdk_tgt_config.json.rOz 00:04:28.509 + ret=0 00:04:28.509 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:28.767 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:29.029 + diff -u /tmp/62.pto /tmp/spdk_tgt_config.json.rOz 00:04:29.029 + ret=1 00:04:29.029 + echo '=== Start of file: /tmp/62.pto ===' 00:04:29.029 + cat /tmp/62.pto 00:04:29.029 + echo '=== End of file: /tmp/62.pto ===' 00:04:29.029 + echo '' 00:04:29.029 + echo '=== Start of file: /tmp/spdk_tgt_config.json.rOz ===' 00:04:29.029 + cat /tmp/spdk_tgt_config.json.rOz 00:04:29.029 + echo '=== End of file: /tmp/spdk_tgt_config.json.rOz ===' 00:04:29.029 + echo '' 00:04:29.029 + rm /tmp/62.pto /tmp/spdk_tgt_config.json.rOz 00:04:29.029 + exit 1 00:04:29.029 INFO: configuration change detected. 00:04:29.029 19:38:24 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 
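The "changed" case is manufactured deliberately: the test deletes a malloc bdev that exists only to be removed, then repeats the same save_config/sort/diff sequence and expects a non-empty diff. The perturbation is the single RPC traced above:

    # Remove the throwaway bdev so the next save_config no longer matches the reference.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock \
        bdev_malloc_delete MallocBdevForConfigChangeCheck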
00:04:29.029 19:38:24 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:29.029 19:38:24 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:29.029 19:38:24 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:29.029 19:38:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:29.029 19:38:24 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:29.029 19:38:24 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:29.029 19:38:24 json_config -- json_config/json_config.sh@324 -- # [[ -n 56834 ]] 00:04:29.029 19:38:24 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:29.029 19:38:24 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:29.029 19:38:24 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:29.029 19:38:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:29.029 19:38:24 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:29.029 19:38:24 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:29.029 19:38:24 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:29.029 19:38:24 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:29.029 19:38:24 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:29.029 19:38:24 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:29.029 19:38:24 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:29.029 19:38:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:29.029 19:38:24 json_config -- json_config/json_config.sh@330 -- # killprocess 56834 00:04:29.029 19:38:24 json_config -- common/autotest_common.sh@954 -- # '[' -z 56834 ']' 00:04:29.029 19:38:24 json_config -- common/autotest_common.sh@958 -- # kill -0 56834 00:04:29.029 19:38:24 json_config -- common/autotest_common.sh@959 -- # uname 00:04:29.029 19:38:24 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:29.029 19:38:24 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56834 00:04:29.029 19:38:24 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:29.029 killing process with pid 56834 00:04:29.029 19:38:24 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:29.029 19:38:24 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56834' 00:04:29.029 19:38:24 json_config -- common/autotest_common.sh@973 -- # kill 56834 00:04:29.029 19:38:24 json_config -- common/autotest_common.sh@978 -- # wait 56834 00:04:29.029 19:38:24 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:29.029 19:38:24 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:29.029 19:38:24 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:29.029 19:38:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:29.029 19:38:24 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:29.029 INFO: Success 00:04:29.029 19:38:24 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:29.029 00:04:29.029 real 0m7.230s 00:04:29.029 user 0m10.104s 00:04:29.029 sys 0m1.161s 00:04:29.029 
19:38:24 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.029 ************************************ 00:04:29.029 END TEST json_config 00:04:29.029 19:38:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:29.029 ************************************ 00:04:29.287 19:38:24 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:29.287 19:38:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.287 19:38:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.287 19:38:24 -- common/autotest_common.sh@10 -- # set +x 00:04:29.287 ************************************ 00:04:29.287 START TEST json_config_extra_key 00:04:29.287 ************************************ 00:04:29.287 19:38:24 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:29.287 19:38:24 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:29.287 19:38:24 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:04:29.287 19:38:24 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:29.287 19:38:24 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:29.287 19:38:24 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:29.287 19:38:24 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:29.287 19:38:24 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:29.287 19:38:24 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:29.287 19:38:24 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:29.287 19:38:24 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:29.287 19:38:24 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:29.287 19:38:24 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:29.287 19:38:24 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:29.287 19:38:24 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:29.287 19:38:24 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:29.287 19:38:24 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:29.287 19:38:24 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:29.287 19:38:24 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:29.287 19:38:24 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:29.287 19:38:24 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:29.287 19:38:24 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:29.287 19:38:24 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:29.287 19:38:24 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:29.287 19:38:24 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:29.287 19:38:24 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:29.287 19:38:24 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:29.287 19:38:24 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:29.287 19:38:24 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:29.287 19:38:24 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:29.287 19:38:24 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:29.287 19:38:24 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:29.287 19:38:24 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:29.287 19:38:24 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:29.287 19:38:24 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:29.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.287 --rc genhtml_branch_coverage=1 00:04:29.287 --rc genhtml_function_coverage=1 00:04:29.287 --rc genhtml_legend=1 00:04:29.287 --rc geninfo_all_blocks=1 00:04:29.287 --rc geninfo_unexecuted_blocks=1 00:04:29.287 00:04:29.287 ' 00:04:29.287 19:38:24 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:29.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.287 --rc genhtml_branch_coverage=1 00:04:29.287 --rc genhtml_function_coverage=1 00:04:29.287 --rc genhtml_legend=1 00:04:29.287 --rc geninfo_all_blocks=1 00:04:29.287 --rc geninfo_unexecuted_blocks=1 00:04:29.287 00:04:29.287 ' 00:04:29.287 19:38:24 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:29.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.287 --rc genhtml_branch_coverage=1 00:04:29.287 --rc genhtml_function_coverage=1 00:04:29.287 --rc genhtml_legend=1 00:04:29.287 --rc geninfo_all_blocks=1 00:04:29.287 --rc geninfo_unexecuted_blocks=1 00:04:29.287 00:04:29.287 ' 00:04:29.287 19:38:24 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:29.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.287 --rc genhtml_branch_coverage=1 00:04:29.287 --rc genhtml_function_coverage=1 00:04:29.287 --rc genhtml_legend=1 00:04:29.287 --rc geninfo_all_blocks=1 00:04:29.287 --rc geninfo_unexecuted_blocks=1 00:04:29.287 00:04:29.287 ' 00:04:29.287 19:38:24 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:29.287 19:38:24 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:29.287 19:38:24 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:29.287 19:38:24 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:29.287 19:38:24 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:29.287 19:38:24 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:29.287 19:38:24 
json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:29.287 19:38:24 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:29.287 19:38:24 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:29.287 19:38:24 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:29.287 19:38:24 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:29.287 19:38:24 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:29.287 19:38:24 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:04:29.287 19:38:24 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=91838eb1-5852-43eb-90b2-09876f360ab2 00:04:29.287 19:38:24 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:29.287 19:38:24 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:29.287 19:38:24 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:29.287 19:38:24 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:29.287 19:38:24 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:29.287 19:38:24 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:29.288 19:38:24 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:29.288 19:38:24 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:29.288 19:38:24 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:29.288 19:38:24 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:29.288 19:38:24 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:29.288 19:38:24 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:29.288 19:38:24 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:29.288 19:38:24 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:29.288 19:38:24 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:29.288 19:38:24 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:29.288 19:38:24 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:29.288 19:38:24 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:29.288 19:38:24 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:29.288 19:38:24 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:29.288 19:38:24 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:29.288 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:29.288 19:38:24 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:29.288 19:38:24 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:29.288 19:38:24 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:29.288 19:38:24 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:29.288 19:38:24 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:29.288 19:38:24 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:29.288 19:38:24 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:29.288 19:38:24 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:29.288 19:38:24 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:29.288 19:38:24 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:29.288 19:38:24 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:29.288 19:38:24 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:29.288 19:38:24 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:29.288 INFO: launching applications... 00:04:29.288 19:38:24 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
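One line in the sourcing above is an actual shell complaint rather than test output: nvmf/common.sh line 33 hands the [ builtin an empty string for a numeric -eq test, which it rejects with "integer expression expected". The run continues because the failed test simply behaves as false, but the tidy fix is to default the value before comparing. A hedged illustration; the variable name is hypothetical, since the xtrace only shows the empty expansion:

    # '[' '' -eq 1 ']' is what the log shows: an unset/empty flag reaching a numeric test.
    # Defaulting the expansion keeps the comparison well-formed.
    if [ "${SPDK_TEST_SOME_FLAG:-0}" -eq 1 ]; then
        echo 'feature enabled'
    fi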
00:04:29.288 19:38:24 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:29.288 19:38:24 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:29.288 19:38:24 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:29.288 19:38:24 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:29.288 19:38:24 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:29.288 19:38:24 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:29.288 19:38:24 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:29.288 19:38:24 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:29.288 19:38:24 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=56982 00:04:29.288 Waiting for target to run... 00:04:29.288 19:38:24 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:29.288 19:38:24 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 56982 /var/tmp/spdk_tgt.sock 00:04:29.288 19:38:24 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:29.288 19:38:24 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 56982 ']' 00:04:29.288 19:38:24 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:29.288 19:38:24 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:29.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:29.288 19:38:24 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:29.288 19:38:24 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:29.288 19:38:24 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:29.288 [2024-11-26 19:38:24.475962] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:04:29.288 [2024-11-26 19:38:24.476029] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56982 ] 00:04:29.546 [2024-11-26 19:38:24.777375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.808 [2024-11-26 19:38:24.806123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.808 [2024-11-26 19:38:24.837235] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:30.068 19:38:25 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:30.068 19:38:25 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:30.068 00:04:30.068 19:38:25 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:30.068 INFO: shutting down applications... 00:04:30.068 19:38:25 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
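"Shutting down applications" is the same polling pattern the earlier json_config run traced in full: send SIGINT to the target, then probe the pid with kill -0 until it disappears. A condensed sketch of that wait loop, following the xtrace (kill -0, a 0.5 s sleep, at most 30 rounds):

    kill -SIGINT "${app_pid[$app]}"
    for ((i = 0; i < 30; i++)); do
        # kill -0 sends no signal; it only tests whether the process still exists.
        if ! kill -0 "${app_pid[$app]}" 2> /dev/null; then
            app_pid[$app]=''
            break
        fi
        sleep 0.5
    done
    echo 'SPDK target shutdown done'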
00:04:30.068 19:38:25 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:30.068 19:38:25 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:30.068 19:38:25 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:30.068 19:38:25 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 56982 ]] 00:04:30.068 19:38:25 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 56982 00:04:30.068 19:38:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:30.068 19:38:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:30.068 19:38:25 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 56982 00:04:30.068 19:38:25 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:30.640 19:38:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:30.640 19:38:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:30.640 19:38:25 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 56982 00:04:30.640 19:38:25 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:30.640 SPDK target shutdown done 00:04:30.640 19:38:25 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:30.640 19:38:25 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:30.640 19:38:25 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:30.640 Success 00:04:30.640 19:38:25 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:30.640 00:04:30.640 real 0m1.509s 00:04:30.640 user 0m1.187s 00:04:30.640 sys 0m0.267s 00:04:30.640 ************************************ 00:04:30.640 END TEST json_config_extra_key 00:04:30.640 ************************************ 00:04:30.640 19:38:25 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:30.640 19:38:25 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:30.640 19:38:25 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:30.640 19:38:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:30.640 19:38:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.640 19:38:25 -- common/autotest_common.sh@10 -- # set +x 00:04:30.640 ************************************ 00:04:30.640 START TEST alias_rpc 00:04:30.640 ************************************ 00:04:30.640 19:38:25 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:30.900 * Looking for test storage... 
00:04:30.900 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:30.900 19:38:25 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:30.900 19:38:25 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:30.900 19:38:25 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:30.900 19:38:25 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:30.900 19:38:25 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:30.900 19:38:25 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:30.900 19:38:25 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:30.900 19:38:25 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:30.900 19:38:25 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:30.900 19:38:25 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:30.900 19:38:25 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:30.900 19:38:25 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:30.900 19:38:25 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:30.900 19:38:25 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:30.900 19:38:25 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:30.900 19:38:25 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:30.900 19:38:25 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:30.900 19:38:25 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:30.900 19:38:25 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:30.900 19:38:25 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:30.900 19:38:25 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:30.900 19:38:25 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:30.900 19:38:25 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:30.900 19:38:25 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:30.900 19:38:25 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:30.900 19:38:25 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:30.900 19:38:25 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:30.900 19:38:25 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:30.900 19:38:25 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:30.900 19:38:25 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:30.900 19:38:25 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:30.900 19:38:25 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:30.900 19:38:25 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:30.900 19:38:25 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:30.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.900 --rc genhtml_branch_coverage=1 00:04:30.900 --rc genhtml_function_coverage=1 00:04:30.900 --rc genhtml_legend=1 00:04:30.900 --rc geninfo_all_blocks=1 00:04:30.900 --rc geninfo_unexecuted_blocks=1 00:04:30.900 00:04:30.900 ' 00:04:30.900 19:38:25 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:30.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.900 --rc genhtml_branch_coverage=1 00:04:30.900 --rc genhtml_function_coverage=1 00:04:30.900 --rc genhtml_legend=1 00:04:30.900 --rc geninfo_all_blocks=1 00:04:30.900 --rc geninfo_unexecuted_blocks=1 00:04:30.900 00:04:30.900 ' 00:04:30.900 19:38:25 alias_rpc -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:30.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.900 --rc genhtml_branch_coverage=1 00:04:30.900 --rc genhtml_function_coverage=1 00:04:30.900 --rc genhtml_legend=1 00:04:30.900 --rc geninfo_all_blocks=1 00:04:30.900 --rc geninfo_unexecuted_blocks=1 00:04:30.900 00:04:30.900 ' 00:04:30.900 19:38:25 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:30.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.900 --rc genhtml_branch_coverage=1 00:04:30.900 --rc genhtml_function_coverage=1 00:04:30.900 --rc genhtml_legend=1 00:04:30.900 --rc geninfo_all_blocks=1 00:04:30.900 --rc geninfo_unexecuted_blocks=1 00:04:30.900 00:04:30.900 ' 00:04:30.900 19:38:25 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:30.900 19:38:25 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57055 00:04:30.900 19:38:25 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57055 00:04:30.900 19:38:25 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57055 ']' 00:04:30.900 19:38:25 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:30.900 19:38:25 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:30.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:30.900 19:38:25 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:30.900 19:38:25 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:30.900 19:38:25 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.900 19:38:25 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:30.900 [2024-11-26 19:38:26.039152] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:04:30.900 [2024-11-26 19:38:26.039383] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57055 ] 00:04:31.159 [2024-11-26 19:38:26.180168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.159 [2024-11-26 19:38:26.218184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.159 [2024-11-26 19:38:26.266203] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:31.727 19:38:26 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:31.727 19:38:26 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:31.727 19:38:26 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:31.988 19:38:27 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57055 00:04:31.988 19:38:27 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57055 ']' 00:04:31.988 19:38:27 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57055 00:04:31.988 19:38:27 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:31.988 19:38:27 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:31.988 19:38:27 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57055 00:04:31.988 19:38:27 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:31.988 19:38:27 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:31.988 killing process with pid 57055 00:04:31.988 19:38:27 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57055' 00:04:31.988 19:38:27 alias_rpc -- common/autotest_common.sh@973 -- # kill 57055 00:04:31.988 19:38:27 alias_rpc -- common/autotest_common.sh@978 -- # wait 57055 00:04:32.247 00:04:32.247 real 0m1.470s 00:04:32.247 user 0m1.630s 00:04:32.247 sys 0m0.310s 00:04:32.247 19:38:27 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.247 19:38:27 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.247 ************************************ 00:04:32.247 END TEST alias_rpc 00:04:32.247 ************************************ 00:04:32.247 19:38:27 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:32.247 19:38:27 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:32.247 19:38:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.247 19:38:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.247 19:38:27 -- common/autotest_common.sh@10 -- # set +x 00:04:32.247 ************************************ 00:04:32.247 START TEST spdkcli_tcp 00:04:32.247 ************************************ 00:04:32.247 19:38:27 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:32.247 * Looking for test storage... 
00:04:32.247 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:32.247 19:38:27 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:32.247 19:38:27 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:32.247 19:38:27 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:32.507 19:38:27 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:32.507 19:38:27 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:32.507 19:38:27 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:32.507 19:38:27 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:32.507 19:38:27 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:32.507 19:38:27 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:32.507 19:38:27 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:32.507 19:38:27 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:32.507 19:38:27 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:32.507 19:38:27 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:32.507 19:38:27 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:32.507 19:38:27 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:32.507 19:38:27 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:32.507 19:38:27 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:32.507 19:38:27 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:32.507 19:38:27 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:32.507 19:38:27 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:32.507 19:38:27 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:32.507 19:38:27 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:32.507 19:38:27 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:32.507 19:38:27 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:32.507 19:38:27 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:32.507 19:38:27 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:32.507 19:38:27 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:32.507 19:38:27 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:32.507 19:38:27 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:32.507 19:38:27 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:32.507 19:38:27 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:32.507 19:38:27 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:32.507 19:38:27 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:32.507 19:38:27 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:32.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.507 --rc genhtml_branch_coverage=1 00:04:32.507 --rc genhtml_function_coverage=1 00:04:32.507 --rc genhtml_legend=1 00:04:32.507 --rc geninfo_all_blocks=1 00:04:32.507 --rc geninfo_unexecuted_blocks=1 00:04:32.507 00:04:32.507 ' 00:04:32.507 19:38:27 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:32.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.507 --rc genhtml_branch_coverage=1 00:04:32.507 --rc genhtml_function_coverage=1 00:04:32.507 --rc genhtml_legend=1 00:04:32.507 --rc geninfo_all_blocks=1 00:04:32.507 --rc geninfo_unexecuted_blocks=1 00:04:32.507 
00:04:32.507 ' 00:04:32.507 19:38:27 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:32.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.507 --rc genhtml_branch_coverage=1 00:04:32.507 --rc genhtml_function_coverage=1 00:04:32.507 --rc genhtml_legend=1 00:04:32.507 --rc geninfo_all_blocks=1 00:04:32.507 --rc geninfo_unexecuted_blocks=1 00:04:32.507 00:04:32.507 ' 00:04:32.507 19:38:27 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:32.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.507 --rc genhtml_branch_coverage=1 00:04:32.507 --rc genhtml_function_coverage=1 00:04:32.507 --rc genhtml_legend=1 00:04:32.507 --rc geninfo_all_blocks=1 00:04:32.507 --rc geninfo_unexecuted_blocks=1 00:04:32.507 00:04:32.507 ' 00:04:32.507 19:38:27 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:32.508 19:38:27 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:32.508 19:38:27 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:32.508 19:38:27 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:32.508 19:38:27 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:32.508 19:38:27 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:32.508 19:38:27 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:32.508 19:38:27 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:32.508 19:38:27 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:32.508 19:38:27 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57133 00:04:32.508 19:38:27 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:32.508 19:38:27 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57133 00:04:32.508 19:38:27 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 57133 ']' 00:04:32.508 19:38:27 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:32.508 19:38:27 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:32.508 19:38:27 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:32.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:32.508 19:38:27 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:32.508 19:38:27 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:32.508 [2024-11-26 19:38:27.574698] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
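What distinguishes this test from the previous ones shows up in the run that follows: the target still listens on a UNIX-domain socket, but a socat bridge republishes it on 127.0.0.1:9998 so rpc.py can be exercised over TCP. A hedged sketch of that bridge, using the same addresses the test sets up above:

    # Bridge the target's UNIX-domain RPC socket onto the TCP port the test uses.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    # -r/-t add connection retries and a timeout while socat and the target come up.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill "$socat_pid"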
00:04:32.508 [2024-11-26 19:38:27.574907] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57133 ] 00:04:32.508 [2024-11-26 19:38:27.715786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:32.769 [2024-11-26 19:38:27.753550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:32.769 [2024-11-26 19:38:27.753753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.769 [2024-11-26 19:38:27.800057] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:33.338 19:38:28 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:33.338 19:38:28 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:33.338 19:38:28 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:33.338 19:38:28 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57145 00:04:33.338 19:38:28 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:33.599 [ 00:04:33.599 "bdev_malloc_delete", 00:04:33.599 "bdev_malloc_create", 00:04:33.599 "bdev_null_resize", 00:04:33.599 "bdev_null_delete", 00:04:33.599 "bdev_null_create", 00:04:33.599 "bdev_nvme_cuse_unregister", 00:04:33.599 "bdev_nvme_cuse_register", 00:04:33.599 "bdev_opal_new_user", 00:04:33.599 "bdev_opal_set_lock_state", 00:04:33.599 "bdev_opal_delete", 00:04:33.599 "bdev_opal_get_info", 00:04:33.599 "bdev_opal_create", 00:04:33.599 "bdev_nvme_opal_revert", 00:04:33.599 "bdev_nvme_opal_init", 00:04:33.599 "bdev_nvme_send_cmd", 00:04:33.599 "bdev_nvme_set_keys", 00:04:33.599 "bdev_nvme_get_path_iostat", 00:04:33.599 "bdev_nvme_get_mdns_discovery_info", 00:04:33.599 "bdev_nvme_stop_mdns_discovery", 00:04:33.599 "bdev_nvme_start_mdns_discovery", 00:04:33.599 "bdev_nvme_set_multipath_policy", 00:04:33.599 "bdev_nvme_set_preferred_path", 00:04:33.599 "bdev_nvme_get_io_paths", 00:04:33.599 "bdev_nvme_remove_error_injection", 00:04:33.599 "bdev_nvme_add_error_injection", 00:04:33.599 "bdev_nvme_get_discovery_info", 00:04:33.599 "bdev_nvme_stop_discovery", 00:04:33.599 "bdev_nvme_start_discovery", 00:04:33.599 "bdev_nvme_get_controller_health_info", 00:04:33.599 "bdev_nvme_disable_controller", 00:04:33.599 "bdev_nvme_enable_controller", 00:04:33.599 "bdev_nvme_reset_controller", 00:04:33.599 "bdev_nvme_get_transport_statistics", 00:04:33.599 "bdev_nvme_apply_firmware", 00:04:33.599 "bdev_nvme_detach_controller", 00:04:33.599 "bdev_nvme_get_controllers", 00:04:33.599 "bdev_nvme_attach_controller", 00:04:33.599 "bdev_nvme_set_hotplug", 00:04:33.599 "bdev_nvme_set_options", 00:04:33.599 "bdev_passthru_delete", 00:04:33.599 "bdev_passthru_create", 00:04:33.599 "bdev_lvol_set_parent_bdev", 00:04:33.599 "bdev_lvol_set_parent", 00:04:33.599 "bdev_lvol_check_shallow_copy", 00:04:33.599 "bdev_lvol_start_shallow_copy", 00:04:33.599 "bdev_lvol_grow_lvstore", 00:04:33.599 "bdev_lvol_get_lvols", 00:04:33.599 "bdev_lvol_get_lvstores", 00:04:33.599 "bdev_lvol_delete", 00:04:33.599 "bdev_lvol_set_read_only", 00:04:33.599 "bdev_lvol_resize", 00:04:33.599 "bdev_lvol_decouple_parent", 00:04:33.599 "bdev_lvol_inflate", 00:04:33.599 "bdev_lvol_rename", 00:04:33.599 "bdev_lvol_clone_bdev", 00:04:33.599 "bdev_lvol_clone", 00:04:33.599 "bdev_lvol_snapshot", 
00:04:33.599 "bdev_lvol_create", 00:04:33.599 "bdev_lvol_delete_lvstore", 00:04:33.599 "bdev_lvol_rename_lvstore", 00:04:33.599 "bdev_lvol_create_lvstore", 00:04:33.599 "bdev_raid_set_options", 00:04:33.599 "bdev_raid_remove_base_bdev", 00:04:33.599 "bdev_raid_add_base_bdev", 00:04:33.599 "bdev_raid_delete", 00:04:33.599 "bdev_raid_create", 00:04:33.599 "bdev_raid_get_bdevs", 00:04:33.599 "bdev_error_inject_error", 00:04:33.599 "bdev_error_delete", 00:04:33.599 "bdev_error_create", 00:04:33.599 "bdev_split_delete", 00:04:33.599 "bdev_split_create", 00:04:33.599 "bdev_delay_delete", 00:04:33.599 "bdev_delay_create", 00:04:33.599 "bdev_delay_update_latency", 00:04:33.599 "bdev_zone_block_delete", 00:04:33.599 "bdev_zone_block_create", 00:04:33.599 "blobfs_create", 00:04:33.599 "blobfs_detect", 00:04:33.599 "blobfs_set_cache_size", 00:04:33.599 "bdev_aio_delete", 00:04:33.599 "bdev_aio_rescan", 00:04:33.599 "bdev_aio_create", 00:04:33.599 "bdev_ftl_set_property", 00:04:33.599 "bdev_ftl_get_properties", 00:04:33.599 "bdev_ftl_get_stats", 00:04:33.599 "bdev_ftl_unmap", 00:04:33.599 "bdev_ftl_unload", 00:04:33.599 "bdev_ftl_delete", 00:04:33.599 "bdev_ftl_load", 00:04:33.599 "bdev_ftl_create", 00:04:33.599 "bdev_virtio_attach_controller", 00:04:33.599 "bdev_virtio_scsi_get_devices", 00:04:33.599 "bdev_virtio_detach_controller", 00:04:33.599 "bdev_virtio_blk_set_hotplug", 00:04:33.599 "bdev_iscsi_delete", 00:04:33.599 "bdev_iscsi_create", 00:04:33.599 "bdev_iscsi_set_options", 00:04:33.599 "bdev_uring_delete", 00:04:33.599 "bdev_uring_rescan", 00:04:33.599 "bdev_uring_create", 00:04:33.599 "accel_error_inject_error", 00:04:33.599 "ioat_scan_accel_module", 00:04:33.599 "dsa_scan_accel_module", 00:04:33.599 "iaa_scan_accel_module", 00:04:33.599 "keyring_file_remove_key", 00:04:33.599 "keyring_file_add_key", 00:04:33.599 "keyring_linux_set_options", 00:04:33.599 "fsdev_aio_delete", 00:04:33.599 "fsdev_aio_create", 00:04:33.599 "iscsi_get_histogram", 00:04:33.599 "iscsi_enable_histogram", 00:04:33.599 "iscsi_set_options", 00:04:33.599 "iscsi_get_auth_groups", 00:04:33.599 "iscsi_auth_group_remove_secret", 00:04:33.599 "iscsi_auth_group_add_secret", 00:04:33.599 "iscsi_delete_auth_group", 00:04:33.599 "iscsi_create_auth_group", 00:04:33.599 "iscsi_set_discovery_auth", 00:04:33.599 "iscsi_get_options", 00:04:33.599 "iscsi_target_node_request_logout", 00:04:33.599 "iscsi_target_node_set_redirect", 00:04:33.599 "iscsi_target_node_set_auth", 00:04:33.599 "iscsi_target_node_add_lun", 00:04:33.599 "iscsi_get_stats", 00:04:33.599 "iscsi_get_connections", 00:04:33.599 "iscsi_portal_group_set_auth", 00:04:33.599 "iscsi_start_portal_group", 00:04:33.599 "iscsi_delete_portal_group", 00:04:33.599 "iscsi_create_portal_group", 00:04:33.599 "iscsi_get_portal_groups", 00:04:33.599 "iscsi_delete_target_node", 00:04:33.599 "iscsi_target_node_remove_pg_ig_maps", 00:04:33.599 "iscsi_target_node_add_pg_ig_maps", 00:04:33.599 "iscsi_create_target_node", 00:04:33.599 "iscsi_get_target_nodes", 00:04:33.599 "iscsi_delete_initiator_group", 00:04:33.599 "iscsi_initiator_group_remove_initiators", 00:04:33.599 "iscsi_initiator_group_add_initiators", 00:04:33.599 "iscsi_create_initiator_group", 00:04:33.599 "iscsi_get_initiator_groups", 00:04:33.599 "nvmf_set_crdt", 00:04:33.599 "nvmf_set_config", 00:04:33.599 "nvmf_set_max_subsystems", 00:04:33.599 "nvmf_stop_mdns_prr", 00:04:33.599 "nvmf_publish_mdns_prr", 00:04:33.599 "nvmf_subsystem_get_listeners", 00:04:33.599 "nvmf_subsystem_get_qpairs", 00:04:33.599 
"nvmf_subsystem_get_controllers", 00:04:33.599 "nvmf_get_stats", 00:04:33.599 "nvmf_get_transports", 00:04:33.599 "nvmf_create_transport", 00:04:33.599 "nvmf_get_targets", 00:04:33.599 "nvmf_delete_target", 00:04:33.599 "nvmf_create_target", 00:04:33.599 "nvmf_subsystem_allow_any_host", 00:04:33.599 "nvmf_subsystem_set_keys", 00:04:33.599 "nvmf_subsystem_remove_host", 00:04:33.599 "nvmf_subsystem_add_host", 00:04:33.599 "nvmf_ns_remove_host", 00:04:33.599 "nvmf_ns_add_host", 00:04:33.599 "nvmf_subsystem_remove_ns", 00:04:33.599 "nvmf_subsystem_set_ns_ana_group", 00:04:33.599 "nvmf_subsystem_add_ns", 00:04:33.599 "nvmf_subsystem_listener_set_ana_state", 00:04:33.599 "nvmf_discovery_get_referrals", 00:04:33.599 "nvmf_discovery_remove_referral", 00:04:33.599 "nvmf_discovery_add_referral", 00:04:33.599 "nvmf_subsystem_remove_listener", 00:04:33.599 "nvmf_subsystem_add_listener", 00:04:33.599 "nvmf_delete_subsystem", 00:04:33.599 "nvmf_create_subsystem", 00:04:33.599 "nvmf_get_subsystems", 00:04:33.599 "env_dpdk_get_mem_stats", 00:04:33.599 "nbd_get_disks", 00:04:33.600 "nbd_stop_disk", 00:04:33.600 "nbd_start_disk", 00:04:33.600 "ublk_recover_disk", 00:04:33.600 "ublk_get_disks", 00:04:33.600 "ublk_stop_disk", 00:04:33.600 "ublk_start_disk", 00:04:33.600 "ublk_destroy_target", 00:04:33.600 "ublk_create_target", 00:04:33.600 "virtio_blk_create_transport", 00:04:33.600 "virtio_blk_get_transports", 00:04:33.600 "vhost_controller_set_coalescing", 00:04:33.600 "vhost_get_controllers", 00:04:33.600 "vhost_delete_controller", 00:04:33.600 "vhost_create_blk_controller", 00:04:33.600 "vhost_scsi_controller_remove_target", 00:04:33.600 "vhost_scsi_controller_add_target", 00:04:33.600 "vhost_start_scsi_controller", 00:04:33.600 "vhost_create_scsi_controller", 00:04:33.600 "thread_set_cpumask", 00:04:33.600 "scheduler_set_options", 00:04:33.600 "framework_get_governor", 00:04:33.600 "framework_get_scheduler", 00:04:33.600 "framework_set_scheduler", 00:04:33.600 "framework_get_reactors", 00:04:33.600 "thread_get_io_channels", 00:04:33.600 "thread_get_pollers", 00:04:33.600 "thread_get_stats", 00:04:33.600 "framework_monitor_context_switch", 00:04:33.600 "spdk_kill_instance", 00:04:33.600 "log_enable_timestamps", 00:04:33.600 "log_get_flags", 00:04:33.600 "log_clear_flag", 00:04:33.600 "log_set_flag", 00:04:33.600 "log_get_level", 00:04:33.600 "log_set_level", 00:04:33.600 "log_get_print_level", 00:04:33.600 "log_set_print_level", 00:04:33.600 "framework_enable_cpumask_locks", 00:04:33.600 "framework_disable_cpumask_locks", 00:04:33.600 "framework_wait_init", 00:04:33.600 "framework_start_init", 00:04:33.600 "scsi_get_devices", 00:04:33.600 "bdev_get_histogram", 00:04:33.600 "bdev_enable_histogram", 00:04:33.600 "bdev_set_qos_limit", 00:04:33.600 "bdev_set_qd_sampling_period", 00:04:33.600 "bdev_get_bdevs", 00:04:33.600 "bdev_reset_iostat", 00:04:33.600 "bdev_get_iostat", 00:04:33.600 "bdev_examine", 00:04:33.600 "bdev_wait_for_examine", 00:04:33.600 "bdev_set_options", 00:04:33.600 "accel_get_stats", 00:04:33.600 "accel_set_options", 00:04:33.600 "accel_set_driver", 00:04:33.600 "accel_crypto_key_destroy", 00:04:33.600 "accel_crypto_keys_get", 00:04:33.600 "accel_crypto_key_create", 00:04:33.600 "accel_assign_opc", 00:04:33.600 "accel_get_module_info", 00:04:33.600 "accel_get_opc_assignments", 00:04:33.600 "vmd_rescan", 00:04:33.600 "vmd_remove_device", 00:04:33.600 "vmd_enable", 00:04:33.600 "sock_get_default_impl", 00:04:33.600 "sock_set_default_impl", 00:04:33.600 "sock_impl_set_options", 00:04:33.600 
"sock_impl_get_options", 00:04:33.600 "iobuf_get_stats", 00:04:33.600 "iobuf_set_options", 00:04:33.600 "keyring_get_keys", 00:04:33.600 "framework_get_pci_devices", 00:04:33.600 "framework_get_config", 00:04:33.600 "framework_get_subsystems", 00:04:33.600 "fsdev_set_opts", 00:04:33.600 "fsdev_get_opts", 00:04:33.600 "trace_get_info", 00:04:33.600 "trace_get_tpoint_group_mask", 00:04:33.600 "trace_disable_tpoint_group", 00:04:33.600 "trace_enable_tpoint_group", 00:04:33.600 "trace_clear_tpoint_mask", 00:04:33.600 "trace_set_tpoint_mask", 00:04:33.600 "notify_get_notifications", 00:04:33.600 "notify_get_types", 00:04:33.600 "spdk_get_version", 00:04:33.600 "rpc_get_methods" 00:04:33.600 ] 00:04:33.600 19:38:28 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:33.600 19:38:28 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:33.600 19:38:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:33.600 19:38:28 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:33.600 19:38:28 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57133 00:04:33.600 19:38:28 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57133 ']' 00:04:33.600 19:38:28 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57133 00:04:33.600 19:38:28 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:33.600 19:38:28 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:33.600 19:38:28 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57133 00:04:33.600 killing process with pid 57133 00:04:33.600 19:38:28 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:33.600 19:38:28 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:33.600 19:38:28 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57133' 00:04:33.600 19:38:28 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57133 00:04:33.600 19:38:28 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57133 00:04:33.858 ************************************ 00:04:33.858 END TEST spdkcli_tcp 00:04:33.858 ************************************ 00:04:33.858 00:04:33.858 real 0m1.485s 00:04:33.858 user 0m2.735s 00:04:33.858 sys 0m0.325s 00:04:33.858 19:38:28 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:33.858 19:38:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:33.858 19:38:28 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:33.858 19:38:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:33.858 19:38:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.858 19:38:28 -- common/autotest_common.sh@10 -- # set +x 00:04:33.858 ************************************ 00:04:33.858 START TEST dpdk_mem_utility 00:04:33.858 ************************************ 00:04:33.858 19:38:28 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:33.858 * Looking for test storage... 
00:04:33.858 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:33.858 19:38:28 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:33.858 19:38:28 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:33.858 19:38:28 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:04:33.858 19:38:29 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:33.858 19:38:29 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:33.858 19:38:29 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:33.858 19:38:29 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:33.858 19:38:29 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:33.858 19:38:29 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:33.858 19:38:29 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:33.858 19:38:29 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:33.858 19:38:29 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:33.858 19:38:29 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:33.858 19:38:29 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:33.858 19:38:29 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:33.858 19:38:29 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:33.858 19:38:29 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:33.858 19:38:29 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:33.858 19:38:29 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:33.858 19:38:29 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:33.858 19:38:29 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:33.858 19:38:29 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:33.858 19:38:29 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:33.858 19:38:29 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:33.858 19:38:29 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:33.858 19:38:29 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:33.858 19:38:29 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:33.858 19:38:29 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:33.858 19:38:29 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:33.858 19:38:29 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:33.858 19:38:29 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:33.858 19:38:29 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:33.858 19:38:29 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:33.858 19:38:29 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:33.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.858 --rc genhtml_branch_coverage=1 00:04:33.858 --rc genhtml_function_coverage=1 00:04:33.858 --rc genhtml_legend=1 00:04:33.858 --rc geninfo_all_blocks=1 00:04:33.858 --rc geninfo_unexecuted_blocks=1 00:04:33.858 00:04:33.858 ' 00:04:33.858 19:38:29 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:33.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.858 --rc 
genhtml_branch_coverage=1 00:04:33.858 --rc genhtml_function_coverage=1 00:04:33.858 --rc genhtml_legend=1 00:04:33.858 --rc geninfo_all_blocks=1 00:04:33.858 --rc geninfo_unexecuted_blocks=1 00:04:33.858 00:04:33.858 ' 00:04:33.858 19:38:29 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:33.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.858 --rc genhtml_branch_coverage=1 00:04:33.858 --rc genhtml_function_coverage=1 00:04:33.858 --rc genhtml_legend=1 00:04:33.858 --rc geninfo_all_blocks=1 00:04:33.858 --rc geninfo_unexecuted_blocks=1 00:04:33.858 00:04:33.858 ' 00:04:33.858 19:38:29 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:33.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.858 --rc genhtml_branch_coverage=1 00:04:33.858 --rc genhtml_function_coverage=1 00:04:33.858 --rc genhtml_legend=1 00:04:33.858 --rc geninfo_all_blocks=1 00:04:33.858 --rc geninfo_unexecuted_blocks=1 00:04:33.858 00:04:33.858 ' 00:04:33.858 19:38:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:33.858 19:38:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57227 00:04:33.858 19:38:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57227 00:04:33.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:33.858 19:38:29 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 57227 ']' 00:04:33.858 19:38:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:33.858 19:38:29 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:33.858 19:38:29 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:33.858 19:38:29 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:33.858 19:38:29 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:33.858 19:38:29 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:34.115 [2024-11-26 19:38:29.105181] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
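The memory check that follows is a two-step flow: first ask the running target to dump its DPDK memory state over RPC, then post-process the dump with the helper script named above. A hedged sketch using the same tools the test declares:

    # The RPC replies with the dump location (the run below shows /tmp/spdk_mem_dump.txt).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
    # Summarize heaps, mempools and memzones; the '-m 0' invocation, as traced below,
    # yields the detailed per-element listing for heap 0.
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0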
00:04:34.115 [2024-11-26 19:38:29.105458] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57227 ] 00:04:34.115 [2024-11-26 19:38:29.237282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.115 [2024-11-26 19:38:29.272495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.116 [2024-11-26 19:38:29.315226] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:35.049 19:38:29 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:35.049 19:38:29 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:35.049 19:38:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:35.049 19:38:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:35.049 19:38:29 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.049 19:38:29 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:35.049 { 00:04:35.049 "filename": "/tmp/spdk_mem_dump.txt" 00:04:35.049 } 00:04:35.049 19:38:29 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.049 19:38:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:35.049 DPDK memory size 818.000000 MiB in 1 heap(s) 00:04:35.049 1 heaps totaling size 818.000000 MiB 00:04:35.049 size: 818.000000 MiB heap id: 0 00:04:35.049 end heaps---------- 00:04:35.049 9 mempools totaling size 603.782043 MiB 00:04:35.049 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:35.049 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:35.049 size: 100.555481 MiB name: bdev_io_57227 00:04:35.049 size: 50.003479 MiB name: msgpool_57227 00:04:35.049 size: 36.509338 MiB name: fsdev_io_57227 00:04:35.049 size: 21.763794 MiB name: PDU_Pool 00:04:35.049 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:35.049 size: 4.133484 MiB name: evtpool_57227 00:04:35.049 size: 0.026123 MiB name: Session_Pool 00:04:35.049 end mempools------- 00:04:35.049 6 memzones totaling size 4.142822 MiB 00:04:35.049 size: 1.000366 MiB name: RG_ring_0_57227 00:04:35.049 size: 1.000366 MiB name: RG_ring_1_57227 00:04:35.049 size: 1.000366 MiB name: RG_ring_4_57227 00:04:35.049 size: 1.000366 MiB name: RG_ring_5_57227 00:04:35.049 size: 0.125366 MiB name: RG_ring_2_57227 00:04:35.049 size: 0.015991 MiB name: RG_ring_3_57227 00:04:35.049 end memzones------- 00:04:35.049 19:38:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:35.049 heap id: 0 total size: 818.000000 MiB number of busy elements: 314 number of free elements: 15 00:04:35.050 list of free elements. 
size: 10.803040 MiB 00:04:35.050 element at address: 0x200019200000 with size: 0.999878 MiB 00:04:35.050 element at address: 0x200019400000 with size: 0.999878 MiB 00:04:35.050 element at address: 0x200032000000 with size: 0.994446 MiB 00:04:35.050 element at address: 0x200000400000 with size: 0.993958 MiB 00:04:35.050 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:35.050 element at address: 0x200012c00000 with size: 0.944275 MiB 00:04:35.050 element at address: 0x200019600000 with size: 0.936584 MiB 00:04:35.050 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:35.050 element at address: 0x20001ae00000 with size: 0.568237 MiB 00:04:35.050 element at address: 0x20000a600000 with size: 0.488892 MiB 00:04:35.050 element at address: 0x200000c00000 with size: 0.486267 MiB 00:04:35.050 element at address: 0x200019800000 with size: 0.485657 MiB 00:04:35.050 element at address: 0x200003e00000 with size: 0.480286 MiB 00:04:35.050 element at address: 0x200028200000 with size: 0.395752 MiB 00:04:35.050 element at address: 0x200000800000 with size: 0.351746 MiB 00:04:35.050 list of standard malloc elements. size: 199.268066 MiB 00:04:35.050 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:35.050 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:35.050 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:35.050 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:04:35.050 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:04:35.050 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:35.050 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:04:35.050 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:35.050 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:04:35.050 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:35.050 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:35.050 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:04:35.050 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:04:35.050 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:04:35.050 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:04:35.050 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:04:35.050 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:04:35.050 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:04:35.050 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:04:35.050 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:04:35.050 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:04:35.050 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:04:35.050 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:04:35.050 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:04:35.050 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:04:35.050 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:04:35.050 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:04:35.050 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:04:35.050 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:04:35.050 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:04:35.050 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:04:35.050 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:04:35.050 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:04:35.050 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:04:35.050 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:04:35.050 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:04:35.050 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:35.050 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:35.050 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:04:35.050 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:35.050 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:35.050 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:04:35.050 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:04:35.050 element at address: 0x20000085e580 with size: 0.000183 MiB 00:04:35.050 element at address: 0x20000087e840 with size: 0.000183 MiB 00:04:35.050 element at address: 0x20000087e900 with size: 0.000183 MiB 00:04:35.050 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:04:35.050 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:04:35.050 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:04:35.050 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:04:35.050 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:04:35.050 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:04:35.050 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:04:35.050 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:04:35.050 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:04:35.050 element at address: 0x20000087f080 with size: 0.000183 MiB 00:04:35.050 element at address: 0x20000087f140 with size: 0.000183 MiB 00:04:35.050 element at address: 0x20000087f200 with size: 0.000183 MiB 00:04:35.050 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:04:35.050 element at address: 0x20000087f380 with size: 0.000183 MiB 00:04:35.050 element at address: 0x20000087f440 with size: 0.000183 MiB 00:04:35.050 element at address: 0x20000087f500 with size: 0.000183 MiB 00:04:35.050 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:35.050 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:35.050 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:35.050 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:35.050 element at address: 0x200000c7c7c0 with size: 0.000183 MiB 00:04:35.050 element at address: 0x200000c7c880 with size: 0.000183 MiB 00:04:35.050 element at address: 0x200000c7c940 with size: 0.000183 MiB 00:04:35.050 element at address: 0x200000c7ca00 with size: 0.000183 MiB 00:04:35.050 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:04:35.050 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:04:35.050 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:04:35.050 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:04:35.050 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:04:35.050 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:04:35.050 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:04:35.050 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:04:35.050 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:04:35.050 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:04:35.050 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:04:35.050 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:04:35.050 element at 
address: 0x200000c7d3c0 with size: 0.000183 MiB 00:04:35.050 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:04:35.050 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:04:35.050 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:04:35.050 element at address: 0x200000c7d6c0 with size: 0.000183 MiB 00:04:35.050 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:04:35.050 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:04:35.050 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:04:35.050 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:04:35.050 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:04:35.050 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:04:35.050 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:04:35.050 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:04:35.050 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:04:35.050 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:04:35.050 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:04:35.050 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:04:35.050 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:04:35.050 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:04:35.050 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:04:35.050 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:04:35.050 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:04:35.050 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:04:35.050 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:04:35.050 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:04:35.050 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:04:35.050 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:04:35.050 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:04:35.050 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:04:35.050 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:04:35.050 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:04:35.050 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:04:35.050 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:04:35.050 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:04:35.050 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:35.050 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:35.050 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:35.050 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:04:35.050 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:04:35.050 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:04:35.050 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:04:35.050 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:04:35.050 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:04:35.050 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:04:35.050 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:04:35.050 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:04:35.050 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:35.050 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:35.050 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:35.050 element at address: 0x2000064fdd80 
with size: 0.000183 MiB 00:04:35.050 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:04:35.050 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:04:35.050 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:04:35.050 element at address: 0x20000a67d4c0 with size: 0.000183 MiB 00:04:35.050 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:04:35.051 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:04:35.051 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:04:35.051 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:04:35.051 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae91780 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae91840 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae91900 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae919c0 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae91a80 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae91b40 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae91c00 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae91cc0 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae91d80 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae91e40 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae91f00 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae91fc0 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae92080 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae92140 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae92200 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae922c0 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae92380 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae92440 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae92500 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae925c0 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae92680 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae92740 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae92800 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae928c0 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae92980 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae92a40 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae92b00 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae92bc0 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae92c80 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae92d40 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae92e00 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae92ec0 with size: 0.000183 MiB 
00:04:35.051 element at address: 0x20001ae92f80 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae93040 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae93100 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae931c0 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae93280 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae93340 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae93400 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae934c0 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae93580 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae93640 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae93700 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae937c0 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae93880 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae93940 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae93a00 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae93ac0 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae93b80 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae93c40 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae93d00 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae93dc0 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae93e80 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae93f40 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae94000 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae940c0 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae94180 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae94240 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae94300 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae943c0 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae94480 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae94540 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae94600 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae946c0 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae94780 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae94840 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae94900 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae949c0 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae94a80 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae94b40 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae94c00 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae94cc0 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae94d80 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae94e40 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae94f00 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae94fc0 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae95080 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae95140 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae95200 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae952c0 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:04:35.051 element at 
address: 0x20001ae95440 with size: 0.000183 MiB 00:04:35.051 element at address: 0x200028265500 with size: 0.000183 MiB 00:04:35.051 element at address: 0x2000282655c0 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20002826c1c0 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20002826c3c0 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20002826c480 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20002826c540 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20002826c600 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20002826c6c0 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20002826c780 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20002826c840 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20002826c900 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20002826c9c0 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20002826ca80 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20002826cb40 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20002826cc00 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20002826ccc0 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20002826cd80 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20002826ce40 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20002826cf00 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20002826cfc0 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20002826d080 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20002826d140 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20002826d200 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20002826d2c0 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20002826d380 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20002826d440 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20002826d500 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20002826d5c0 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20002826d680 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20002826d740 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20002826d800 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20002826d8c0 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20002826d980 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20002826da40 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20002826db00 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20002826dbc0 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20002826dc80 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20002826dd40 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20002826de00 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20002826dec0 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20002826df80 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20002826e040 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20002826e100 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20002826e1c0 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20002826e280 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20002826e340 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20002826e400 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20002826e4c0 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20002826e580 
with size: 0.000183 MiB 00:04:35.051 element at address: 0x20002826e640 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20002826e700 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20002826e7c0 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20002826e880 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20002826e940 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20002826ea00 with size: 0.000183 MiB 00:04:35.051 element at address: 0x20002826eac0 with size: 0.000183 MiB 00:04:35.052 element at address: 0x20002826eb80 with size: 0.000183 MiB 00:04:35.052 element at address: 0x20002826ec40 with size: 0.000183 MiB 00:04:35.052 element at address: 0x20002826ed00 with size: 0.000183 MiB 00:04:35.052 element at address: 0x20002826edc0 with size: 0.000183 MiB 00:04:35.052 element at address: 0x20002826ee80 with size: 0.000183 MiB 00:04:35.052 element at address: 0x20002826ef40 with size: 0.000183 MiB 00:04:35.052 element at address: 0x20002826f000 with size: 0.000183 MiB 00:04:35.052 element at address: 0x20002826f0c0 with size: 0.000183 MiB 00:04:35.052 element at address: 0x20002826f180 with size: 0.000183 MiB 00:04:35.052 element at address: 0x20002826f240 with size: 0.000183 MiB 00:04:35.052 element at address: 0x20002826f300 with size: 0.000183 MiB 00:04:35.052 element at address: 0x20002826f3c0 with size: 0.000183 MiB 00:04:35.052 element at address: 0x20002826f480 with size: 0.000183 MiB 00:04:35.052 element at address: 0x20002826f540 with size: 0.000183 MiB 00:04:35.052 element at address: 0x20002826f600 with size: 0.000183 MiB 00:04:35.052 element at address: 0x20002826f6c0 with size: 0.000183 MiB 00:04:35.052 element at address: 0x20002826f780 with size: 0.000183 MiB 00:04:35.052 element at address: 0x20002826f840 with size: 0.000183 MiB 00:04:35.052 element at address: 0x20002826f900 with size: 0.000183 MiB 00:04:35.052 element at address: 0x20002826f9c0 with size: 0.000183 MiB 00:04:35.052 element at address: 0x20002826fa80 with size: 0.000183 MiB 00:04:35.052 element at address: 0x20002826fb40 with size: 0.000183 MiB 00:04:35.052 element at address: 0x20002826fc00 with size: 0.000183 MiB 00:04:35.052 element at address: 0x20002826fcc0 with size: 0.000183 MiB 00:04:35.052 element at address: 0x20002826fd80 with size: 0.000183 MiB 00:04:35.052 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:04:35.052 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:04:35.052 list of memzone associated elements. 
size: 607.928894 MiB 00:04:35.052 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:04:35.052 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:35.052 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:04:35.052 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:35.052 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:04:35.052 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_57227_0 00:04:35.052 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:35.052 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57227_0 00:04:35.052 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:35.052 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57227_0 00:04:35.052 element at address: 0x2000199be940 with size: 20.255554 MiB 00:04:35.052 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:35.052 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:04:35.052 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:35.052 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:35.052 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57227_0 00:04:35.052 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:35.052 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57227 00:04:35.052 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:35.052 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57227 00:04:35.052 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:35.052 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:35.052 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:04:35.052 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:35.052 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:35.052 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:35.052 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:35.052 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:35.052 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:35.052 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57227 00:04:35.052 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:35.052 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57227 00:04:35.052 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:04:35.052 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57227 00:04:35.052 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:04:35.052 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57227 00:04:35.052 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:35.052 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57227 00:04:35.052 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:35.052 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57227 00:04:35.052 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:35.052 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:35.052 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:35.052 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:35.052 element at address: 0x20001987c540 with size: 0.250488 MiB 00:04:35.052 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:04:35.052 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:35.052 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57227 00:04:35.052 element at address: 0x20000085e640 with size: 0.125488 MiB 00:04:35.052 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57227 00:04:35.052 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:04:35.052 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:35.052 element at address: 0x200028265680 with size: 0.023743 MiB 00:04:35.052 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:35.052 element at address: 0x20000085a380 with size: 0.016113 MiB 00:04:35.052 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57227 00:04:35.052 element at address: 0x20002826b7c0 with size: 0.002441 MiB 00:04:35.052 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:35.052 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:04:35.052 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57227 00:04:35.052 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:35.052 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57227 00:04:35.052 element at address: 0x20000085a180 with size: 0.000305 MiB 00:04:35.052 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57227 00:04:35.052 element at address: 0x20002826c280 with size: 0.000305 MiB 00:04:35.052 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:35.052 19:38:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:35.052 19:38:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57227 00:04:35.052 19:38:30 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 57227 ']' 00:04:35.052 19:38:30 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 57227 00:04:35.052 19:38:30 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:35.052 19:38:30 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:35.052 19:38:30 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57227 00:04:35.052 19:38:30 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:35.052 19:38:30 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:35.052 19:38:30 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57227' 00:04:35.052 killing process with pid 57227 00:04:35.052 19:38:30 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 57227 00:04:35.052 19:38:30 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 57227 00:04:35.310 ************************************ 00:04:35.310 END TEST dpdk_mem_utility 00:04:35.310 ************************************ 00:04:35.310 00:04:35.310 real 0m1.390s 00:04:35.310 user 0m1.511s 00:04:35.310 sys 0m0.294s 00:04:35.310 19:38:30 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:35.310 19:38:30 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:35.310 19:38:30 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:35.310 19:38:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:35.310 19:38:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:35.310 19:38:30 -- common/autotest_common.sh@10 -- # set +x 
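In the memzone summary above, every per-process pool and ring embeds the target's pid (57227): msgpool_57227, bdev_io_57227, fsdev_io_57227, evtpool_57227 and the RG_ring_N_57227 regions, which makes it easy to attribute allocations to this particular spdk_tgt instance. A small illustrative filter over that summary is sketched below; it assumes the "size: <n> MiB name: <name>" layout printed above was captured to a file (mem_summary.txt is a hypothetical name) and is not part of the test itself.

  # Print "<name> <size in MiB>" for each named mempool/memzone in the captured summary
  awk '$1 == "size:" && $4 == "name:" {print $5, $2 " MiB"}' mem_summary.txt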
00:04:35.310 ************************************ 00:04:35.310 START TEST event 00:04:35.310 ************************************ 00:04:35.310 19:38:30 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:35.310 * Looking for test storage... 00:04:35.310 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:35.310 19:38:30 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:35.310 19:38:30 event -- common/autotest_common.sh@1693 -- # lcov --version 00:04:35.310 19:38:30 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:35.310 19:38:30 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:35.310 19:38:30 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:35.310 19:38:30 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:35.310 19:38:30 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:35.310 19:38:30 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:35.310 19:38:30 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:35.310 19:38:30 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:35.310 19:38:30 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:35.310 19:38:30 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:35.310 19:38:30 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:35.310 19:38:30 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:35.310 19:38:30 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:35.310 19:38:30 event -- scripts/common.sh@344 -- # case "$op" in 00:04:35.310 19:38:30 event -- scripts/common.sh@345 -- # : 1 00:04:35.310 19:38:30 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:35.310 19:38:30 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:35.310 19:38:30 event -- scripts/common.sh@365 -- # decimal 1 00:04:35.310 19:38:30 event -- scripts/common.sh@353 -- # local d=1 00:04:35.310 19:38:30 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:35.310 19:38:30 event -- scripts/common.sh@355 -- # echo 1 00:04:35.310 19:38:30 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:35.310 19:38:30 event -- scripts/common.sh@366 -- # decimal 2 00:04:35.310 19:38:30 event -- scripts/common.sh@353 -- # local d=2 00:04:35.310 19:38:30 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:35.310 19:38:30 event -- scripts/common.sh@355 -- # echo 2 00:04:35.310 19:38:30 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:35.310 19:38:30 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:35.310 19:38:30 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:35.310 19:38:30 event -- scripts/common.sh@368 -- # return 0 00:04:35.310 19:38:30 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:35.310 19:38:30 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:35.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.310 --rc genhtml_branch_coverage=1 00:04:35.310 --rc genhtml_function_coverage=1 00:04:35.310 --rc genhtml_legend=1 00:04:35.310 --rc geninfo_all_blocks=1 00:04:35.310 --rc geninfo_unexecuted_blocks=1 00:04:35.310 00:04:35.310 ' 00:04:35.310 19:38:30 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:35.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.310 --rc genhtml_branch_coverage=1 00:04:35.310 --rc genhtml_function_coverage=1 00:04:35.310 --rc genhtml_legend=1 00:04:35.310 --rc 
geninfo_all_blocks=1 00:04:35.310 --rc geninfo_unexecuted_blocks=1 00:04:35.310 00:04:35.310 ' 00:04:35.310 19:38:30 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:35.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.310 --rc genhtml_branch_coverage=1 00:04:35.310 --rc genhtml_function_coverage=1 00:04:35.310 --rc genhtml_legend=1 00:04:35.310 --rc geninfo_all_blocks=1 00:04:35.310 --rc geninfo_unexecuted_blocks=1 00:04:35.310 00:04:35.310 ' 00:04:35.310 19:38:30 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:35.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.310 --rc genhtml_branch_coverage=1 00:04:35.310 --rc genhtml_function_coverage=1 00:04:35.310 --rc genhtml_legend=1 00:04:35.310 --rc geninfo_all_blocks=1 00:04:35.310 --rc geninfo_unexecuted_blocks=1 00:04:35.310 00:04:35.310 ' 00:04:35.310 19:38:30 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:35.310 19:38:30 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:35.310 19:38:30 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:35.310 19:38:30 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:35.310 19:38:30 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:35.310 19:38:30 event -- common/autotest_common.sh@10 -- # set +x 00:04:35.310 ************************************ 00:04:35.310 START TEST event_perf 00:04:35.310 ************************************ 00:04:35.310 19:38:30 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:35.310 Running I/O for 1 seconds...[2024-11-26 19:38:30.516015] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:04:35.310 [2024-11-26 19:38:30.516416] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57301 ] 00:04:35.574 [2024-11-26 19:38:30.655800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:35.574 [2024-11-26 19:38:30.694387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:35.574 [2024-11-26 19:38:30.694631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:35.574 [2024-11-26 19:38:30.694683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:35.574 [2024-11-26 19:38:30.694689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.518 Running I/O for 1 seconds... 00:04:36.518 lcore 0: 178465 00:04:36.518 lcore 1: 178467 00:04:36.518 lcore 2: 178465 00:04:36.518 lcore 3: 178464 00:04:36.518 done. 
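event_perf was launched above with a four-core mask (-m 0xF) for one second (-t 1), and the per-lcore counters it prints sum to 713,861 events over that one-second run (178465 + 178467 + 178465 + 178464); the timing summary follows below. The same binary can be run outside the harness with the flags seen in the trace; the mask and duration in this example are arbitrary and not values from this run.

  # 2 cores for 5 seconds (illustrative values; -m and -t as used by the harness above)
  /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0x3 -t 5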
00:04:36.518 00:04:36.519 real 0m1.228s 00:04:36.519 user 0m4.076s 00:04:36.519 sys 0m0.031s 00:04:36.519 19:38:31 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.519 19:38:31 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:36.519 ************************************ 00:04:36.519 END TEST event_perf 00:04:36.519 ************************************ 00:04:36.519 19:38:31 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:36.519 19:38:31 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:36.519 19:38:31 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.519 19:38:31 event -- common/autotest_common.sh@10 -- # set +x 00:04:36.776 ************************************ 00:04:36.776 START TEST event_reactor 00:04:36.776 ************************************ 00:04:36.776 19:38:31 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:36.776 [2024-11-26 19:38:31.783963] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:04:36.776 [2024-11-26 19:38:31.784128] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57339 ] 00:04:36.776 [2024-11-26 19:38:31.917502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.776 [2024-11-26 19:38:31.954404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.149 test_start 00:04:38.149 oneshot 00:04:38.149 tick 100 00:04:38.149 tick 100 00:04:38.149 tick 250 00:04:38.149 tick 100 00:04:38.149 tick 100 00:04:38.149 tick 100 00:04:38.149 tick 250 00:04:38.149 tick 500 00:04:38.149 tick 100 00:04:38.149 tick 100 00:04:38.149 tick 250 00:04:38.149 tick 100 00:04:38.149 tick 100 00:04:38.149 test_end 00:04:38.149 00:04:38.149 real 0m1.216s 00:04:38.149 user 0m1.084s 00:04:38.149 sys 0m0.026s 00:04:38.149 ************************************ 00:04:38.149 END TEST event_reactor 00:04:38.149 ************************************ 00:04:38.149 19:38:32 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:38.149 19:38:32 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:38.149 19:38:33 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:38.149 19:38:33 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:38.149 19:38:33 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.149 19:38:33 event -- common/autotest_common.sh@10 -- # set +x 00:04:38.149 ************************************ 00:04:38.149 START TEST event_reactor_perf 00:04:38.149 ************************************ 00:04:38.149 19:38:33 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:38.149 [2024-11-26 19:38:33.043952] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:04:38.149 [2024-11-26 19:38:33.044141] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57375 ] 00:04:38.149 [2024-11-26 19:38:33.183754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.149 [2024-11-26 19:38:33.219839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.081 test_start 00:04:39.081 test_end 00:04:39.081 Performance: 389087 events per second 00:04:39.081 00:04:39.081 real 0m1.224s 00:04:39.081 user 0m1.093s 00:04:39.081 sys 0m0.024s 00:04:39.081 19:38:34 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:39.081 19:38:34 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:39.081 ************************************ 00:04:39.081 END TEST event_reactor_perf 00:04:39.081 ************************************ 00:04:39.081 19:38:34 event -- event/event.sh@49 -- # uname -s 00:04:39.081 19:38:34 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:39.081 19:38:34 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:39.081 19:38:34 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:39.081 19:38:34 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.081 19:38:34 event -- common/autotest_common.sh@10 -- # set +x 00:04:39.081 ************************************ 00:04:39.081 START TEST event_scheduler 00:04:39.081 ************************************ 00:04:39.081 19:38:34 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:39.338 * Looking for test storage... 
00:04:39.338 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:39.338 19:38:34 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:39.338 19:38:34 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:04:39.338 19:38:34 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:39.338 19:38:34 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:39.338 19:38:34 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:39.338 19:38:34 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:39.338 19:38:34 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:39.338 19:38:34 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:39.338 19:38:34 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:39.338 19:38:34 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:39.338 19:38:34 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:39.338 19:38:34 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:39.338 19:38:34 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:39.338 19:38:34 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:39.338 19:38:34 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:39.338 19:38:34 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:39.338 19:38:34 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:39.338 19:38:34 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:39.338 19:38:34 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:39.338 19:38:34 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:39.338 19:38:34 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:39.338 19:38:34 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:39.338 19:38:34 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:39.338 19:38:34 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:39.338 19:38:34 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:39.338 19:38:34 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:39.338 19:38:34 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:39.338 19:38:34 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:39.338 19:38:34 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:39.338 19:38:34 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:39.338 19:38:34 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:39.338 19:38:34 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:39.339 19:38:34 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:39.339 19:38:34 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:39.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.339 --rc genhtml_branch_coverage=1 00:04:39.339 --rc genhtml_function_coverage=1 00:04:39.339 --rc genhtml_legend=1 00:04:39.339 --rc geninfo_all_blocks=1 00:04:39.339 --rc geninfo_unexecuted_blocks=1 00:04:39.339 00:04:39.339 ' 00:04:39.339 19:38:34 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:39.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.339 --rc genhtml_branch_coverage=1 00:04:39.339 --rc genhtml_function_coverage=1 00:04:39.339 --rc genhtml_legend=1 00:04:39.339 --rc geninfo_all_blocks=1 00:04:39.339 --rc geninfo_unexecuted_blocks=1 00:04:39.339 00:04:39.339 ' 00:04:39.339 19:38:34 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:39.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.339 --rc genhtml_branch_coverage=1 00:04:39.339 --rc genhtml_function_coverage=1 00:04:39.339 --rc genhtml_legend=1 00:04:39.339 --rc geninfo_all_blocks=1 00:04:39.339 --rc geninfo_unexecuted_blocks=1 00:04:39.339 00:04:39.339 ' 00:04:39.339 19:38:34 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:39.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.339 --rc genhtml_branch_coverage=1 00:04:39.339 --rc genhtml_function_coverage=1 00:04:39.339 --rc genhtml_legend=1 00:04:39.339 --rc geninfo_all_blocks=1 00:04:39.339 --rc geninfo_unexecuted_blocks=1 00:04:39.339 00:04:39.339 ' 00:04:39.339 19:38:34 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:39.339 19:38:34 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=57439 00:04:39.339 19:38:34 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:39.339 19:38:34 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 57439 00:04:39.339 19:38:34 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 57439 ']' 00:04:39.339 19:38:34 event.event_scheduler -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:04:39.339 19:38:34 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:39.339 19:38:34 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:39.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:39.339 19:38:34 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:39.339 19:38:34 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:39.339 19:38:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:39.339 [2024-11-26 19:38:34.474973] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:04:39.339 [2024-11-26 19:38:34.475125] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57439 ] 00:04:39.596 [2024-11-26 19:38:34.617270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:39.596 [2024-11-26 19:38:34.657576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.596 [2024-11-26 19:38:34.657851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:39.596 [2024-11-26 19:38:34.658000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:39.596 [2024-11-26 19:38:34.658421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:40.190 19:38:35 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:40.190 19:38:35 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:40.190 19:38:35 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:40.190 19:38:35 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.190 19:38:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:40.190 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:40.190 POWER: Cannot set governor of lcore 0 to userspace 00:04:40.190 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:40.190 POWER: Cannot set governor of lcore 0 to performance 00:04:40.190 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:40.190 POWER: Cannot set governor of lcore 0 to userspace 00:04:40.190 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:40.190 POWER: Cannot set governor of lcore 0 to userspace 00:04:40.190 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:04:40.190 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:40.190 POWER: Unable to set Power Management Environment for lcore 0 00:04:40.190 [2024-11-26 19:38:35.355012] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:04:40.190 [2024-11-26 19:38:35.355023] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:04:40.191 [2024-11-26 19:38:35.355028] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:40.191 [2024-11-26 19:38:35.355036] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:40.191 [2024-11-26 19:38:35.355041] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:40.191 [2024-11-26 19:38:35.355045] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:40.191 19:38:35 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.191 19:38:35 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:40.191 19:38:35 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.191 19:38:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:40.191 [2024-11-26 19:38:35.394042] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:40.191 [2024-11-26 19:38:35.418056] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:40.191 19:38:35 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.191 19:38:35 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:40.191 19:38:35 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:40.191 19:38:35 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.191 19:38:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:40.191 ************************************ 00:04:40.191 START TEST scheduler_create_thread 00:04:40.191 ************************************ 00:04:40.191 19:38:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:40.191 19:38:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:40.191 19:38:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.191 19:38:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:40.448 2 00:04:40.448 19:38:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.448 19:38:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:40.449 19:38:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.449 19:38:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:40.449 3 00:04:40.449 19:38:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.449 19:38:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:40.449 19:38:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.449 19:38:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:40.449 4 00:04:40.449 19:38:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.449 19:38:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:40.449 19:38:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.449 19:38:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:40.449 5 00:04:40.449 19:38:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.449 19:38:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:40.449 19:38:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.449 19:38:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:40.449 6 00:04:40.449 19:38:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.449 19:38:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:40.449 19:38:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.449 19:38:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:40.449 7 00:04:40.449 19:38:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.449 19:38:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:40.449 19:38:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.449 19:38:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:40.449 8 00:04:40.449 19:38:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.449 19:38:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:40.449 19:38:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.449 19:38:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:40.449 9 00:04:40.449 19:38:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.449 19:38:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:40.449 19:38:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.449 19:38:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:40.449 10 00:04:40.449 19:38:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.449 19:38:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:40.449 19:38:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.449 19:38:35 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:40.449 19:38:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.449 19:38:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:40.449 19:38:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:40.449 19:38:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.449 19:38:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:40.449 19:38:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.449 19:38:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:40.449 19:38:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.449 19:38:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:40.449 19:38:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.449 19:38:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:40.449 19:38:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:40.449 19:38:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.449 19:38:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:41.382 19:38:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:41.382 00:04:41.382 real 0m1.172s 00:04:41.382 user 0m0.014s 00:04:41.382 sys 0m0.005s 00:04:41.382 19:38:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.382 19:38:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:41.382 ************************************ 00:04:41.382 END TEST scheduler_create_thread 00:04:41.382 ************************************ 00:04:41.639 19:38:36 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:41.639 19:38:36 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 57439 00:04:41.639 19:38:36 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 57439 ']' 00:04:41.639 19:38:36 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 57439 00:04:41.639 19:38:36 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:41.639 19:38:36 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:41.639 19:38:36 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57439 00:04:41.639 killing process with pid 57439 00:04:41.639 19:38:36 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:41.639 19:38:36 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:41.639 19:38:36 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
57439' 00:04:41.639 19:38:36 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 57439 00:04:41.639 19:38:36 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 57439 00:04:41.896 [2024-11-26 19:38:37.079746] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:04:42.153 00:04:42.154 real 0m2.882s 00:04:42.154 user 0m5.146s 00:04:42.154 sys 0m0.294s 00:04:42.154 19:38:37 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:42.154 19:38:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:42.154 ************************************ 00:04:42.154 END TEST event_scheduler 00:04:42.154 ************************************ 00:04:42.154 19:38:37 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:42.154 19:38:37 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:42.154 19:38:37 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:42.154 19:38:37 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.154 19:38:37 event -- common/autotest_common.sh@10 -- # set +x 00:04:42.154 ************************************ 00:04:42.154 START TEST app_repeat 00:04:42.154 ************************************ 00:04:42.154 19:38:37 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:42.154 19:38:37 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:42.154 19:38:37 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:42.154 19:38:37 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:42.154 19:38:37 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:42.154 19:38:37 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:42.154 19:38:37 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:42.154 19:38:37 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:42.154 Process app_repeat pid: 57522 00:04:42.154 spdk_app_start Round 0 00:04:42.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:42.154 19:38:37 event.app_repeat -- event/event.sh@19 -- # repeat_pid=57522 00:04:42.154 19:38:37 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:42.154 19:38:37 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 57522' 00:04:42.154 19:38:37 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:42.154 19:38:37 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:42.154 19:38:37 event.app_repeat -- event/event.sh@25 -- # waitforlisten 57522 /var/tmp/spdk-nbd.sock 00:04:42.154 19:38:37 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 57522 ']' 00:04:42.154 19:38:37 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:42.154 19:38:37 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:42.154 19:38:37 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:42.154 19:38:37 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
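For readability, the event_scheduler run traced above reduces to a short RPC sequence. The sketch below is reconstructed from the xtrace lines: the scheduler binary path, plugin name, cpu masks and activity levels are the ones printed in the log, rpc_cmd is the autotest helper that forwards to scripts/rpc.py on /var/tmp/spdk.sock as set at the start of the trace, and the for loop is only a condensed paraphrase of the eight individual thread_create calls.

/home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
rpc_cmd framework_set_scheduler dynamic        # sets load limit 20, core limit 80, core busy 95
rpc_cmd framework_start_init
# one active (-a 100) and one idle (-a 0) thread pinned to each of the four cores
for mask in 0x1 0x2 0x4 0x8; do
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m $mask -a 100
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m $mask -a 0
done
# unpinned threads: one partially active, one raised to 50% afterwards, one deleted again
rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)    # id 11 in this run
rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active $thread_id 50
thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)      # id 12 in this run
rpc_cmd --plugin scheduler_plugin scheduler_thread_delete $thread_id
killprocess $scheduler_pid                     # pid 57439 in the trace above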
00:04:42.154 19:38:37 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:42.154 19:38:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:42.154 [2024-11-26 19:38:37.249500] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:04:42.154 [2024-11-26 19:38:37.249682] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57522 ] 00:04:42.154 [2024-11-26 19:38:37.390632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:42.411 [2024-11-26 19:38:37.429074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:42.411 [2024-11-26 19:38:37.429087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.411 [2024-11-26 19:38:37.462679] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:42.973 19:38:38 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:42.973 19:38:38 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:42.973 19:38:38 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:43.292 Malloc0 00:04:43.292 19:38:38 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:43.603 Malloc1 00:04:43.603 19:38:38 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:43.603 19:38:38 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:43.603 19:38:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:43.603 19:38:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:43.603 19:38:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:43.603 19:38:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:43.603 19:38:38 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:43.603 19:38:38 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:43.603 19:38:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:43.603 19:38:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:43.603 19:38:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:43.603 19:38:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:43.603 19:38:38 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:43.603 19:38:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:43.603 19:38:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:43.603 19:38:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:43.603 /dev/nbd0 00:04:43.603 19:38:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:43.603 19:38:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:43.603 19:38:38 event.app_repeat -- common/autotest_common.sh@872 -- # local 
nbd_name=nbd0 00:04:43.603 19:38:38 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:43.603 19:38:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:43.603 19:38:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:43.603 19:38:38 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:43.603 19:38:38 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:43.603 19:38:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:43.603 19:38:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:43.603 19:38:38 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:43.603 1+0 records in 00:04:43.603 1+0 records out 00:04:43.603 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00025474 s, 16.1 MB/s 00:04:43.603 19:38:38 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:43.603 19:38:38 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:43.603 19:38:38 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:43.603 19:38:38 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:43.603 19:38:38 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:43.603 19:38:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:43.603 19:38:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:43.603 19:38:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:43.861 /dev/nbd1 00:04:43.861 19:38:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:43.861 19:38:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:43.861 19:38:39 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:43.861 19:38:39 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:43.861 19:38:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:43.861 19:38:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:43.861 19:38:39 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:43.861 19:38:39 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:43.861 19:38:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:43.861 19:38:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:43.861 19:38:39 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:43.861 1+0 records in 00:04:43.861 1+0 records out 00:04:43.861 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000232424 s, 17.6 MB/s 00:04:43.861 19:38:39 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:43.861 19:38:39 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:43.861 19:38:39 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:43.861 19:38:39 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:43.861 19:38:39 event.app_repeat -- 
common/autotest_common.sh@893 -- # return 0 00:04:43.861 19:38:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:43.861 19:38:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:43.861 19:38:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:43.861 19:38:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:43.861 19:38:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:44.118 19:38:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:44.118 { 00:04:44.118 "nbd_device": "/dev/nbd0", 00:04:44.118 "bdev_name": "Malloc0" 00:04:44.118 }, 00:04:44.118 { 00:04:44.118 "nbd_device": "/dev/nbd1", 00:04:44.118 "bdev_name": "Malloc1" 00:04:44.118 } 00:04:44.118 ]' 00:04:44.118 19:38:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:44.118 { 00:04:44.118 "nbd_device": "/dev/nbd0", 00:04:44.118 "bdev_name": "Malloc0" 00:04:44.118 }, 00:04:44.118 { 00:04:44.118 "nbd_device": "/dev/nbd1", 00:04:44.118 "bdev_name": "Malloc1" 00:04:44.118 } 00:04:44.118 ]' 00:04:44.118 19:38:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:44.118 19:38:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:44.118 /dev/nbd1' 00:04:44.118 19:38:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:44.118 /dev/nbd1' 00:04:44.118 19:38:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:44.118 19:38:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:44.118 19:38:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:44.118 19:38:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:44.118 19:38:39 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:44.118 19:38:39 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:44.118 19:38:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:44.118 19:38:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:44.118 19:38:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:44.119 19:38:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:44.119 19:38:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:44.119 19:38:39 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:44.119 256+0 records in 00:04:44.119 256+0 records out 00:04:44.119 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0067895 s, 154 MB/s 00:04:44.119 19:38:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:44.119 19:38:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:44.119 256+0 records in 00:04:44.119 256+0 records out 00:04:44.119 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0157898 s, 66.4 MB/s 00:04:44.119 19:38:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:44.119 19:38:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:44.119 256+0 records in 00:04:44.119 
256+0 records out 00:04:44.119 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0172115 s, 60.9 MB/s 00:04:44.119 19:38:39 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:44.119 19:38:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:44.119 19:38:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:44.119 19:38:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:44.119 19:38:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:44.119 19:38:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:44.119 19:38:39 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:44.119 19:38:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:44.119 19:38:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:44.119 19:38:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:44.119 19:38:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:44.119 19:38:39 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:44.119 19:38:39 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:44.119 19:38:39 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:44.119 19:38:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:44.119 19:38:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:44.119 19:38:39 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:44.119 19:38:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:44.119 19:38:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:44.376 19:38:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:44.376 19:38:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:44.376 19:38:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:44.376 19:38:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:44.376 19:38:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:44.376 19:38:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:44.376 19:38:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:44.376 19:38:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:44.376 19:38:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:44.376 19:38:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:44.634 19:38:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:44.634 19:38:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:44.634 19:38:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:44.634 19:38:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:44.634 19:38:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:04:44.634 19:38:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:44.634 19:38:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:44.634 19:38:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:44.634 19:38:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:44.634 19:38:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:44.634 19:38:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:44.892 19:38:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:44.892 19:38:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:44.892 19:38:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:44.892 19:38:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:44.892 19:38:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:44.892 19:38:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:44.892 19:38:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:44.892 19:38:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:44.892 19:38:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:44.892 19:38:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:44.892 19:38:39 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:44.892 19:38:39 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:44.892 19:38:39 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:45.150 19:38:40 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:45.150 [2024-11-26 19:38:40.284032] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:45.150 [2024-11-26 19:38:40.319467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:45.150 [2024-11-26 19:38:40.319660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.150 [2024-11-26 19:38:40.350815] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:45.150 [2024-11-26 19:38:40.350873] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:45.150 [2024-11-26 19:38:40.350881] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:48.436 spdk_app_start Round 1 00:04:48.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:48.436 19:38:43 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:48.436 19:38:43 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:48.436 19:38:43 event.app_repeat -- event/event.sh@25 -- # waitforlisten 57522 /var/tmp/spdk-nbd.sock 00:04:48.436 19:38:43 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 57522 ']' 00:04:48.436 19:38:43 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:48.436 19:38:43 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:48.436 19:38:43 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
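Each app_repeat round traced here (Round 0 above; Rounds 1 and 2 below repeat it verbatim) performs the same malloc/NBD round-trip. The condensed form below is assembled from the xtrace, with the RPC socket, bdev parameters and file paths taken from the log; only the loop structure is paraphrased.

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
testfile=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
$rpc bdev_malloc_create 64 4096             # -> Malloc0
$rpc bdev_malloc_create 64 4096             # -> Malloc1
$rpc nbd_start_disk Malloc0 /dev/nbd0       # waitfornbd then polls /proc/partitions and reads one 4 KiB block
$rpc nbd_start_disk Malloc1 /dev/nbd1
$rpc nbd_get_disks                          # expects both devices to be listed
dd if=/dev/urandom of=$testfile bs=4096 count=256          # 1 MiB of random data
for dev in /dev/nbd0 /dev/nbd1; do
    dd if=$testfile of=$dev bs=4096 count=256 oflag=direct # write it through each NBD device
    cmp -b -n 1M $testfile $dev                            # and verify it reads back identically
done
rm $testfile
$rpc nbd_stop_disk /dev/nbd0
$rpc nbd_stop_disk /dev/nbd1
$rpc nbd_get_disks                          # now expects an empty list
$rpc spdk_kill_instance SIGTERM             # ends the round; event.sh sleeps 3s and starts the next one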
00:04:48.436 19:38:43 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:48.436 19:38:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:48.436 19:38:43 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:48.436 19:38:43 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:48.436 19:38:43 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:48.436 Malloc0 00:04:48.436 19:38:43 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:48.695 Malloc1 00:04:48.695 19:38:43 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:48.695 19:38:43 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:48.695 19:38:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:48.695 19:38:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:48.695 19:38:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:48.695 19:38:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:48.695 19:38:43 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:48.695 19:38:43 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:48.695 19:38:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:48.695 19:38:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:48.695 19:38:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:48.695 19:38:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:48.695 19:38:43 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:48.695 19:38:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:48.695 19:38:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:48.695 19:38:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:48.953 /dev/nbd0 00:04:48.953 19:38:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:48.953 19:38:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:48.953 19:38:44 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:48.953 19:38:44 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:48.953 19:38:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:48.953 19:38:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:48.953 19:38:44 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:48.953 19:38:44 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:48.953 19:38:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:48.953 19:38:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:48.953 19:38:44 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:48.953 1+0 records in 00:04:48.953 1+0 records out 
00:04:48.953 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000220021 s, 18.6 MB/s 00:04:48.953 19:38:44 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:48.953 19:38:44 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:48.953 19:38:44 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:48.953 19:38:44 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:48.953 19:38:44 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:48.953 19:38:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:48.953 19:38:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:48.953 19:38:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:49.210 /dev/nbd1 00:04:49.210 19:38:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:49.210 19:38:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:49.210 19:38:44 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:49.210 19:38:44 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:49.210 19:38:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:49.210 19:38:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:49.210 19:38:44 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:49.210 19:38:44 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:49.210 19:38:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:49.210 19:38:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:49.210 19:38:44 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:49.210 1+0 records in 00:04:49.210 1+0 records out 00:04:49.210 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000246774 s, 16.6 MB/s 00:04:49.210 19:38:44 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:49.210 19:38:44 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:49.210 19:38:44 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:49.210 19:38:44 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:49.210 19:38:44 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:49.210 19:38:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:49.210 19:38:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:49.210 19:38:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:49.210 19:38:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.210 19:38:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:49.468 19:38:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:49.468 { 00:04:49.468 "nbd_device": "/dev/nbd0", 00:04:49.468 "bdev_name": "Malloc0" 00:04:49.468 }, 00:04:49.468 { 00:04:49.468 "nbd_device": "/dev/nbd1", 00:04:49.468 "bdev_name": "Malloc1" 00:04:49.468 } 
00:04:49.468 ]' 00:04:49.468 19:38:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:49.468 { 00:04:49.468 "nbd_device": "/dev/nbd0", 00:04:49.468 "bdev_name": "Malloc0" 00:04:49.468 }, 00:04:49.468 { 00:04:49.468 "nbd_device": "/dev/nbd1", 00:04:49.468 "bdev_name": "Malloc1" 00:04:49.468 } 00:04:49.468 ]' 00:04:49.468 19:38:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:49.468 19:38:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:49.468 /dev/nbd1' 00:04:49.468 19:38:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:49.468 /dev/nbd1' 00:04:49.468 19:38:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:49.468 19:38:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:49.468 19:38:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:49.468 19:38:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:49.468 19:38:44 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:49.468 19:38:44 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:49.468 19:38:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.468 19:38:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:49.468 19:38:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:49.468 19:38:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:49.468 19:38:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:49.468 19:38:44 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:49.468 256+0 records in 00:04:49.468 256+0 records out 00:04:49.468 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00758256 s, 138 MB/s 00:04:49.468 19:38:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:49.468 19:38:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:49.468 256+0 records in 00:04:49.468 256+0 records out 00:04:49.468 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0143327 s, 73.2 MB/s 00:04:49.468 19:38:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:49.468 19:38:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:49.468 256+0 records in 00:04:49.468 256+0 records out 00:04:49.468 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.015882 s, 66.0 MB/s 00:04:49.468 19:38:44 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:49.468 19:38:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.468 19:38:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:49.468 19:38:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:49.468 19:38:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:49.468 19:38:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:49.468 19:38:44 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:49.468 19:38:44 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:04:49.468 19:38:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:49.468 19:38:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:49.468 19:38:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:49.468 19:38:44 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:49.468 19:38:44 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:49.468 19:38:44 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.468 19:38:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.468 19:38:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:49.468 19:38:44 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:49.468 19:38:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:49.468 19:38:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:49.726 19:38:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:49.726 19:38:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:49.726 19:38:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:49.726 19:38:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:49.726 19:38:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:49.726 19:38:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:49.726 19:38:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:49.726 19:38:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:49.726 19:38:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:49.726 19:38:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:49.983 19:38:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:49.983 19:38:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:49.983 19:38:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:49.983 19:38:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:49.983 19:38:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:49.983 19:38:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:49.983 19:38:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:49.983 19:38:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:49.983 19:38:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:49.983 19:38:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.983 19:38:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:50.240 19:38:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:50.240 19:38:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:50.240 19:38:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:04:50.240 19:38:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:50.240 19:38:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:50.240 19:38:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:50.240 19:38:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:50.240 19:38:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:50.240 19:38:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:50.240 19:38:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:50.240 19:38:45 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:50.240 19:38:45 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:50.240 19:38:45 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:50.496 19:38:45 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:50.496 [2024-11-26 19:38:45.624572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:50.496 [2024-11-26 19:38:45.656647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:50.496 [2024-11-26 19:38:45.656785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.496 [2024-11-26 19:38:45.687682] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:50.496 [2024-11-26 19:38:45.687741] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:50.496 [2024-11-26 19:38:45.687747] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:53.849 19:38:48 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:53.849 spdk_app_start Round 2 00:04:53.849 19:38:48 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:53.849 19:38:48 event.app_repeat -- event/event.sh@25 -- # waitforlisten 57522 /var/tmp/spdk-nbd.sock 00:04:53.849 19:38:48 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 57522 ']' 00:04:53.849 19:38:48 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:53.849 19:38:48 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:53.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:53.849 19:38:48 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:04:53.849 19:38:48 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:53.849 19:38:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:53.849 19:38:48 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:53.849 19:38:48 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:53.849 19:38:48 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:53.849 Malloc0 00:04:53.850 19:38:48 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:54.108 Malloc1 00:04:54.108 19:38:49 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:54.108 19:38:49 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.108 19:38:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:54.108 19:38:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:54.108 19:38:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.108 19:38:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:54.108 19:38:49 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:54.108 19:38:49 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.108 19:38:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:54.108 19:38:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:54.108 19:38:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.108 19:38:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:54.108 19:38:49 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:54.108 19:38:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:54.108 19:38:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:54.108 19:38:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:54.367 /dev/nbd0 00:04:54.367 19:38:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:54.367 19:38:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:54.367 19:38:49 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:54.367 19:38:49 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:54.367 19:38:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:54.367 19:38:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:54.367 19:38:49 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:54.367 19:38:49 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:54.367 19:38:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:54.367 19:38:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:54.367 19:38:49 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:54.367 1+0 records in 00:04:54.367 1+0 records out 
00:04:54.367 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000120514 s, 34.0 MB/s 00:04:54.367 19:38:49 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:54.367 19:38:49 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:54.367 19:38:49 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:54.367 19:38:49 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:54.367 19:38:49 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:54.367 19:38:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:54.367 19:38:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:54.367 19:38:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:54.624 /dev/nbd1 00:04:54.624 19:38:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:54.624 19:38:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:54.624 19:38:49 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:54.624 19:38:49 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:54.624 19:38:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:54.624 19:38:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:54.624 19:38:49 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:54.624 19:38:49 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:54.624 19:38:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:54.624 19:38:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:54.624 19:38:49 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:54.624 1+0 records in 00:04:54.624 1+0 records out 00:04:54.624 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000146378 s, 28.0 MB/s 00:04:54.624 19:38:49 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:54.624 19:38:49 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:54.624 19:38:49 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:54.624 19:38:49 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:54.624 19:38:49 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:54.624 19:38:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:54.624 19:38:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:54.624 19:38:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:54.624 19:38:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.624 19:38:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:54.883 19:38:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:54.883 { 00:04:54.883 "nbd_device": "/dev/nbd0", 00:04:54.883 "bdev_name": "Malloc0" 00:04:54.883 }, 00:04:54.883 { 00:04:54.883 "nbd_device": "/dev/nbd1", 00:04:54.883 "bdev_name": "Malloc1" 00:04:54.883 } 
00:04:54.883 ]' 00:04:54.883 19:38:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:54.883 { 00:04:54.883 "nbd_device": "/dev/nbd0", 00:04:54.883 "bdev_name": "Malloc0" 00:04:54.883 }, 00:04:54.883 { 00:04:54.883 "nbd_device": "/dev/nbd1", 00:04:54.883 "bdev_name": "Malloc1" 00:04:54.883 } 00:04:54.883 ]' 00:04:54.883 19:38:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:54.883 19:38:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:54.883 /dev/nbd1' 00:04:54.883 19:38:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:54.883 /dev/nbd1' 00:04:54.883 19:38:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:54.883 19:38:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:54.883 19:38:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:54.883 19:38:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:54.883 19:38:49 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:54.883 19:38:49 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:54.883 19:38:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.883 19:38:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:54.883 19:38:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:54.883 19:38:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:54.883 19:38:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:54.883 19:38:49 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:54.883 256+0 records in 00:04:54.883 256+0 records out 00:04:54.883 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00758532 s, 138 MB/s 00:04:54.883 19:38:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:54.883 19:38:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:54.883 256+0 records in 00:04:54.883 256+0 records out 00:04:54.883 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0140501 s, 74.6 MB/s 00:04:54.883 19:38:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:54.883 19:38:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:54.883 256+0 records in 00:04:54.883 256+0 records out 00:04:54.883 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0148745 s, 70.5 MB/s 00:04:54.883 19:38:49 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:54.883 19:38:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.883 19:38:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:54.883 19:38:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:54.883 19:38:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:54.883 19:38:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:54.883 19:38:49 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:54.883 19:38:49 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:54.883 19:38:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:54.883 19:38:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:54.883 19:38:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:54.883 19:38:49 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:54.883 19:38:49 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:54.883 19:38:49 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.883 19:38:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.883 19:38:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:54.883 19:38:49 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:54.883 19:38:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:54.883 19:38:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:55.141 19:38:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:55.141 19:38:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:55.141 19:38:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:55.141 19:38:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:55.141 19:38:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:55.141 19:38:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:55.141 19:38:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:55.142 19:38:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:55.142 19:38:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:55.142 19:38:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:55.398 19:38:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:55.398 19:38:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:55.398 19:38:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:55.398 19:38:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:55.398 19:38:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:55.398 19:38:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:55.398 19:38:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:55.399 19:38:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:55.399 19:38:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:55.399 19:38:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.399 19:38:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:55.399 19:38:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:55.399 19:38:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:55.399 19:38:50 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:04:55.657 19:38:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:55.657 19:38:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:55.657 19:38:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:55.657 19:38:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:55.657 19:38:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:55.657 19:38:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:55.657 19:38:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:55.657 19:38:50 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:55.657 19:38:50 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:55.657 19:38:50 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:55.657 19:38:50 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:55.914 [2024-11-26 19:38:50.934501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:55.914 [2024-11-26 19:38:50.965384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:55.914 [2024-11-26 19:38:50.965392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.914 [2024-11-26 19:38:50.994669] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:55.914 [2024-11-26 19:38:50.994720] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:55.914 [2024-11-26 19:38:50.994726] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:59.213 19:38:53 event.app_repeat -- event/event.sh@38 -- # waitforlisten 57522 /var/tmp/spdk-nbd.sock 00:04:59.213 19:38:53 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 57522 ']' 00:04:59.213 19:38:53 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:59.213 19:38:53 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:59.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:59.213 19:38:53 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
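The app_repeat pass above exercises the nbd write/verify helpers: random data is written to a temp file, copied onto each exported /dev/nbdX device, read back with cmp, and then the devices are detached over the RPC socket. A condensed, standalone sketch of that round trip, using the paths and device names seen in this run (assumes root and an nbd-capable kernel; not the harness itself):

  tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
  dd if=/dev/urandom of="$tmp" bs=4096 count=256             # generate a 1 MiB pattern
  for dev in /dev/nbd0 /dev/nbd1; do
      dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct   # write the pattern to each device
  done
  for dev in /dev/nbd0 /dev/nbd1; do
      cmp -b -n 1M "$tmp" "$dev"                              # verify it reads back byte-for-byte
  done
  rm "$tmp"
  for dev in /dev/nbd0 /dev/nbd1; do
      scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk "$dev"
  done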
00:04:59.213 19:38:53 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:59.213 19:38:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:59.213 19:38:54 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:59.213 19:38:54 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:59.213 19:38:54 event.app_repeat -- event/event.sh@39 -- # killprocess 57522 00:04:59.213 19:38:54 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 57522 ']' 00:04:59.213 19:38:54 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 57522 00:04:59.213 19:38:54 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:04:59.213 19:38:54 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:59.213 19:38:54 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57522 00:04:59.213 19:38:54 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:59.213 19:38:54 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:59.213 killing process with pid 57522 00:04:59.213 19:38:54 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57522' 00:04:59.213 19:38:54 event.app_repeat -- common/autotest_common.sh@973 -- # kill 57522 00:04:59.213 19:38:54 event.app_repeat -- common/autotest_common.sh@978 -- # wait 57522 00:04:59.213 spdk_app_start is called in Round 0. 00:04:59.213 Shutdown signal received, stop current app iteration 00:04:59.213 Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 reinitialization... 00:04:59.213 spdk_app_start is called in Round 1. 00:04:59.213 Shutdown signal received, stop current app iteration 00:04:59.213 Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 reinitialization... 00:04:59.213 spdk_app_start is called in Round 2. 00:04:59.213 Shutdown signal received, stop current app iteration 00:04:59.213 Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 reinitialization... 00:04:59.213 spdk_app_start is called in Round 3. 00:04:59.213 Shutdown signal received, stop current app iteration 00:04:59.213 19:38:54 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:59.213 19:38:54 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:59.213 00:04:59.213 real 0m16.981s 00:04:59.213 user 0m38.120s 00:04:59.213 sys 0m2.022s 00:04:59.213 19:38:54 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.213 19:38:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:59.213 ************************************ 00:04:59.213 END TEST app_repeat 00:04:59.213 ************************************ 00:04:59.213 19:38:54 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:59.213 19:38:54 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:04:59.213 19:38:54 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.213 19:38:54 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.213 19:38:54 event -- common/autotest_common.sh@10 -- # set +x 00:04:59.213 ************************************ 00:04:59.213 START TEST cpu_locks 00:04:59.213 ************************************ 00:04:59.213 19:38:54 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:04:59.213 * Looking for test storage... 
00:04:59.213 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:59.213 19:38:54 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:59.213 19:38:54 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:04:59.213 19:38:54 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:59.213 19:38:54 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:59.213 19:38:54 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:59.213 19:38:54 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:59.213 19:38:54 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:59.213 19:38:54 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:59.213 19:38:54 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:59.213 19:38:54 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:59.213 19:38:54 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:59.213 19:38:54 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:59.213 19:38:54 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:59.213 19:38:54 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:59.213 19:38:54 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:59.213 19:38:54 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:59.213 19:38:54 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:59.213 19:38:54 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:59.213 19:38:54 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:59.213 19:38:54 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:59.213 19:38:54 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:59.213 19:38:54 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:59.213 19:38:54 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:59.213 19:38:54 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:59.214 19:38:54 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:59.214 19:38:54 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:59.214 19:38:54 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:59.214 19:38:54 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:59.214 19:38:54 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:59.214 19:38:54 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:59.214 19:38:54 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:59.214 19:38:54 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:59.214 19:38:54 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:59.214 19:38:54 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:59.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.214 --rc genhtml_branch_coverage=1 00:04:59.214 --rc genhtml_function_coverage=1 00:04:59.214 --rc genhtml_legend=1 00:04:59.214 --rc geninfo_all_blocks=1 00:04:59.214 --rc geninfo_unexecuted_blocks=1 00:04:59.214 00:04:59.214 ' 00:04:59.214 19:38:54 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:59.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.214 --rc genhtml_branch_coverage=1 00:04:59.214 --rc genhtml_function_coverage=1 
00:04:59.214 --rc genhtml_legend=1 00:04:59.214 --rc geninfo_all_blocks=1 00:04:59.214 --rc geninfo_unexecuted_blocks=1 00:04:59.214 00:04:59.214 ' 00:04:59.214 19:38:54 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:59.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.214 --rc genhtml_branch_coverage=1 00:04:59.214 --rc genhtml_function_coverage=1 00:04:59.214 --rc genhtml_legend=1 00:04:59.214 --rc geninfo_all_blocks=1 00:04:59.214 --rc geninfo_unexecuted_blocks=1 00:04:59.214 00:04:59.214 ' 00:04:59.214 19:38:54 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:59.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.214 --rc genhtml_branch_coverage=1 00:04:59.214 --rc genhtml_function_coverage=1 00:04:59.214 --rc genhtml_legend=1 00:04:59.214 --rc geninfo_all_blocks=1 00:04:59.214 --rc geninfo_unexecuted_blocks=1 00:04:59.214 00:04:59.214 ' 00:04:59.214 19:38:54 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:59.214 19:38:54 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:59.214 19:38:54 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:59.214 19:38:54 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:59.214 19:38:54 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.214 19:38:54 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.214 19:38:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:59.214 ************************************ 00:04:59.214 START TEST default_locks 00:04:59.214 ************************************ 00:04:59.214 19:38:54 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:04:59.214 19:38:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=57946 00:04:59.214 19:38:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 57946 00:04:59.214 19:38:54 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 57946 ']' 00:04:59.214 19:38:54 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.214 19:38:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:59.214 19:38:54 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:59.214 19:38:54 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.214 19:38:54 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:59.214 19:38:54 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:59.214 [2024-11-26 19:38:54.444129] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:04:59.214 [2024-11-26 19:38:54.444754] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57946 ] 00:04:59.473 [2024-11-26 19:38:54.589479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.473 [2024-11-26 19:38:54.633028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.473 [2024-11-26 19:38:54.687195] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:00.407 19:38:55 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:00.407 19:38:55 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:00.407 19:38:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 57946 00:05:00.407 19:38:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:00.407 19:38:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 57946 00:05:00.407 19:38:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 57946 00:05:00.407 19:38:55 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 57946 ']' 00:05:00.407 19:38:55 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 57946 00:05:00.407 19:38:55 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:00.407 19:38:55 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:00.407 19:38:55 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57946 00:05:00.407 killing process with pid 57946 00:05:00.407 19:38:55 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:00.407 19:38:55 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:00.407 19:38:55 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57946' 00:05:00.407 19:38:55 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 57946 00:05:00.407 19:38:55 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 57946 00:05:00.665 19:38:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 57946 00:05:00.665 19:38:55 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:00.665 19:38:55 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 57946 00:05:00.665 19:38:55 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:00.665 19:38:55 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:00.665 19:38:55 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:00.665 19:38:55 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:00.665 19:38:55 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 57946 00:05:00.665 19:38:55 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 57946 ']' 00:05:00.665 19:38:55 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.665 
19:38:55 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:00.665 19:38:55 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:00.665 ERROR: process (pid: 57946) is no longer running 00:05:00.665 19:38:55 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:00.665 19:38:55 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:00.665 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (57946) - No such process 00:05:00.665 19:38:55 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:00.665 19:38:55 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:00.665 19:38:55 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:00.665 19:38:55 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:00.665 19:38:55 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:00.665 19:38:55 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:00.665 19:38:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:00.665 19:38:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:00.665 19:38:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:00.665 19:38:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:00.665 00:05:00.665 real 0m1.337s 00:05:00.665 user 0m1.426s 00:05:00.665 sys 0m0.348s 00:05:00.665 19:38:55 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.665 19:38:55 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:00.665 ************************************ 00:05:00.665 END TEST default_locks 00:05:00.665 ************************************ 00:05:00.665 19:38:55 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:00.665 19:38:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.665 19:38:55 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.665 19:38:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:00.665 ************************************ 00:05:00.665 START TEST default_locks_via_rpc 00:05:00.665 ************************************ 00:05:00.665 19:38:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:00.665 19:38:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=57987 00:05:00.665 19:38:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:00.665 19:38:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 57987 00:05:00.665 19:38:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 57987 ']' 00:05:00.665 19:38:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.665 19:38:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:05:00.665 19:38:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:00.666 19:38:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:00.666 19:38:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.666 [2024-11-26 19:38:55.805927] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:05:00.666 [2024-11-26 19:38:55.805997] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57987 ] 00:05:01.027 [2024-11-26 19:38:55.945441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.027 [2024-11-26 19:38:55.982244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.027 [2024-11-26 19:38:56.029652] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:01.603 19:38:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:01.603 19:38:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:01.603 19:38:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:01.603 19:38:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.603 19:38:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.603 19:38:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.603 19:38:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:01.603 19:38:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:01.603 19:38:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:01.603 19:38:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:01.603 19:38:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:01.603 19:38:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.603 19:38:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.603 19:38:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.603 19:38:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 57987 00:05:01.603 19:38:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 57987 00:05:01.603 19:38:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:01.863 19:38:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 57987 00:05:01.863 19:38:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 57987 ']' 00:05:01.863 19:38:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 57987 00:05:01.863 19:38:56 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:01.863 19:38:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:01.863 19:38:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57987 00:05:01.863 19:38:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:01.863 19:38:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:01.863 19:38:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57987' 00:05:01.863 killing process with pid 57987 00:05:01.863 19:38:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 57987 00:05:01.863 19:38:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 57987 00:05:01.863 ************************************ 00:05:01.863 END TEST default_locks_via_rpc 00:05:01.863 ************************************ 00:05:01.863 00:05:01.863 real 0m1.332s 00:05:01.863 user 0m1.450s 00:05:01.863 sys 0m0.334s 00:05:01.863 19:38:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.863 19:38:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.134 19:38:57 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:02.134 19:38:57 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.134 19:38:57 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.134 19:38:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:02.134 ************************************ 00:05:02.134 START TEST non_locking_app_on_locked_coremask 00:05:02.134 ************************************ 00:05:02.134 19:38:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:02.134 19:38:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58027 00:05:02.134 19:38:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58027 /var/tmp/spdk.sock 00:05:02.134 19:38:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:02.134 19:38:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58027 ']' 00:05:02.134 19:38:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.134 19:38:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:02.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.134 19:38:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:02.134 19:38:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:02.134 19:38:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:02.134 [2024-11-26 19:38:57.183120] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:05:02.134 [2024-11-26 19:38:57.183190] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58027 ] 00:05:02.134 [2024-11-26 19:38:57.320984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.134 [2024-11-26 19:38:57.357603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.392 [2024-11-26 19:38:57.406809] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:02.957 19:38:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:02.957 19:38:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:02.957 19:38:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:02.957 19:38:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58043 00:05:02.957 19:38:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58043 /var/tmp/spdk2.sock 00:05:02.957 19:38:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58043 ']' 00:05:02.957 19:38:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:02.957 19:38:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:02.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:02.957 19:38:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:02.957 19:38:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:02.957 19:38:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:02.957 [2024-11-26 19:38:58.084838] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:05:02.957 [2024-11-26 19:38:58.084899] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58043 ] 00:05:03.215 [2024-11-26 19:38:58.238999] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:03.215 [2024-11-26 19:38:58.239046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.215 [2024-11-26 19:38:58.311343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.215 [2024-11-26 19:38:58.400049] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:03.780 19:38:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:03.780 19:38:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:03.780 19:38:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58027 00:05:03.780 19:38:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58027 00:05:03.780 19:38:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:04.345 19:38:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58027 00:05:04.345 19:38:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58027 ']' 00:05:04.345 19:38:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58027 00:05:04.345 19:38:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:04.345 19:38:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:04.345 19:38:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58027 00:05:04.345 19:38:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:04.345 killing process with pid 58027 00:05:04.345 19:38:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:04.345 19:38:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58027' 00:05:04.345 19:38:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58027 00:05:04.345 19:38:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58027 00:05:04.602 19:38:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58043 00:05:04.602 19:38:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58043 ']' 00:05:04.603 19:38:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58043 00:05:04.603 19:38:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:04.603 19:38:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:04.603 19:38:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58043 00:05:04.603 19:38:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:04.603 19:38:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:04.603 killing process with pid 58043 00:05:04.603 19:38:59 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58043' 00:05:04.603 19:38:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58043 00:05:04.603 19:38:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58043 00:05:04.861 00:05:04.861 real 0m2.813s 00:05:04.861 user 0m3.182s 00:05:04.861 sys 0m0.646s 00:05:04.861 19:38:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:04.861 19:38:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:04.861 ************************************ 00:05:04.861 END TEST non_locking_app_on_locked_coremask 00:05:04.861 ************************************ 00:05:04.861 19:38:59 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:04.861 19:38:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:04.861 19:38:59 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.861 19:38:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:04.861 ************************************ 00:05:04.861 START TEST locking_app_on_unlocked_coremask 00:05:04.861 ************************************ 00:05:04.861 19:38:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:04.861 19:38:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=58099 00:05:04.861 19:38:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 58099 /var/tmp/spdk.sock 00:05:04.861 19:38:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58099 ']' 00:05:04.861 19:38:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:04.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:04.861 19:38:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:04.861 19:38:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:04.861 19:38:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:04.861 19:38:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:04.861 19:38:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:04.861 [2024-11-26 19:39:00.035941] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:05:04.861 [2024-11-26 19:39:00.036009] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58099 ] 00:05:05.120 [2024-11-26 19:39:00.175958] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
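The locks_exist checks that recur through these cpu_locks tests (lslocks -p <pid> piped through grep for spdk_cpu_lock) confirm that a running spdk_tgt actually holds its per-core lock file under /var/tmp. A minimal sketch of the same probe outside the harness (the helper name is made up; $pid would be whichever target was started):

  has_core_lock() {
      local pid=$1
      # SPDK creates one lock file per claimed core, /var/tmp/spdk_cpu_lock_NNN;
      # lslocks lists the file locks held by the process, so a prefix grep suffices
      lslocks -p "$pid" | grep -q spdk_cpu_lock
  }
  has_core_lock "$pid" && echo 'core lock held' || echo 'no core lock'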
00:05:05.120 [2024-11-26 19:39:00.176008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.120 [2024-11-26 19:39:00.212689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.120 [2024-11-26 19:39:00.260831] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:05.686 19:39:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:05.686 19:39:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:05.686 19:39:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=58115 00:05:05.686 19:39:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 58115 /var/tmp/spdk2.sock 00:05:05.686 19:39:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58115 ']' 00:05:05.686 19:39:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:05.686 19:39:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:05.686 19:39:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:05.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:05.686 19:39:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:05.686 19:39:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:05.686 19:39:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:05.944 [2024-11-26 19:39:00.941339] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:05:05.944 [2024-11-26 19:39:00.941411] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58115 ] 00:05:05.944 [2024-11-26 19:39:01.092399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.944 [2024-11-26 19:39:01.165354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.201 [2024-11-26 19:39:01.260886] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:06.764 19:39:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:06.764 19:39:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:06.764 19:39:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 58115 00:05:06.764 19:39:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58115 00:05:06.764 19:39:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:07.020 19:39:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 58099 00:05:07.020 19:39:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58099 ']' 00:05:07.020 19:39:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58099 00:05:07.020 19:39:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:07.020 19:39:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:07.020 19:39:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58099 00:05:07.021 19:39:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:07.021 19:39:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:07.021 killing process with pid 58099 00:05:07.021 19:39:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58099' 00:05:07.021 19:39:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58099 00:05:07.021 19:39:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 58099 00:05:07.279 19:39:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 58115 00:05:07.279 19:39:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58115 ']' 00:05:07.279 19:39:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58115 00:05:07.279 19:39:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:07.279 19:39:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:07.279 19:39:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58115 00:05:07.279 19:39:02 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:07.279 killing process with pid 58115 00:05:07.279 19:39:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:07.279 19:39:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58115' 00:05:07.279 19:39:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58115 00:05:07.279 19:39:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 58115 00:05:07.537 00:05:07.537 real 0m2.649s 00:05:07.537 user 0m2.956s 00:05:07.537 sys 0m0.639s 00:05:07.537 19:39:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:07.537 19:39:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:07.537 ************************************ 00:05:07.537 END TEST locking_app_on_unlocked_coremask 00:05:07.537 ************************************ 00:05:07.537 19:39:02 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:07.537 19:39:02 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:07.537 19:39:02 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:07.537 19:39:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:07.537 ************************************ 00:05:07.537 START TEST locking_app_on_locked_coremask 00:05:07.537 ************************************ 00:05:07.537 19:39:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:07.537 19:39:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=58171 00:05:07.537 19:39:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 58171 /var/tmp/spdk.sock 00:05:07.537 19:39:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58171 ']' 00:05:07.537 19:39:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.537 19:39:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:07.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:07.537 19:39:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.537 19:39:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:07.537 19:39:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:07.537 19:39:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:07.537 [2024-11-26 19:39:02.715741] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:05:07.537 [2024-11-26 19:39:02.715811] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58171 ] 00:05:07.794 [2024-11-26 19:39:02.842850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.794 [2024-11-26 19:39:02.876873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.794 [2024-11-26 19:39:02.919138] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:08.366 19:39:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:08.366 19:39:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:08.366 19:39:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=58186 00:05:08.366 19:39:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 58186 /var/tmp/spdk2.sock 00:05:08.366 19:39:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:08.366 19:39:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58186 /var/tmp/spdk2.sock 00:05:08.366 19:39:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:08.366 19:39:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:08.366 19:39:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:08.366 19:39:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:08.366 19:39:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:08.366 19:39:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 58186 /var/tmp/spdk2.sock 00:05:08.366 19:39:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58186 ']' 00:05:08.366 19:39:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:08.366 19:39:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:08.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:08.366 19:39:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:08.366 19:39:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:08.366 19:39:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:08.624 [2024-11-26 19:39:03.637068] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:05:08.624 [2024-11-26 19:39:03.637141] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58186 ] 00:05:08.624 [2024-11-26 19:39:03.782404] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 58171 has claimed it. 00:05:08.624 [2024-11-26 19:39:03.782458] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:09.189 ERROR: process (pid: 58186) is no longer running 00:05:09.189 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58186) - No such process 00:05:09.189 19:39:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:09.189 19:39:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:09.189 19:39:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:09.189 19:39:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:09.189 19:39:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:09.189 19:39:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:09.189 19:39:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 58171 00:05:09.189 19:39:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58171 00:05:09.189 19:39:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:09.447 19:39:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 58171 00:05:09.447 19:39:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58171 ']' 00:05:09.447 19:39:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58171 00:05:09.447 19:39:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:09.447 19:39:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:09.447 19:39:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58171 00:05:09.447 19:39:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:09.447 19:39:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:09.447 killing process with pid 58171 00:05:09.447 19:39:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58171' 00:05:09.447 19:39:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58171 00:05:09.447 19:39:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58171 00:05:09.705 00:05:09.705 real 0m2.052s 00:05:09.705 user 0m2.396s 00:05:09.705 sys 0m0.374s 00:05:09.705 19:39:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.705 19:39:04 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:05:09.705 ************************************ 00:05:09.705 END TEST locking_app_on_locked_coremask 00:05:09.705 ************************************ 00:05:09.705 19:39:04 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:09.705 19:39:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:09.705 19:39:04 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.705 19:39:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:09.705 ************************************ 00:05:09.705 START TEST locking_overlapped_coremask 00:05:09.705 ************************************ 00:05:09.705 19:39:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:09.705 19:39:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=58227 00:05:09.705 19:39:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 58227 /var/tmp/spdk.sock 00:05:09.705 19:39:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 58227 ']' 00:05:09.705 19:39:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.705 19:39:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:09.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.705 19:39:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.705 19:39:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:09.705 19:39:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:09.705 19:39:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:09.705 [2024-11-26 19:39:04.813803] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:05:09.706 [2024-11-26 19:39:04.813872] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58227 ] 00:05:09.963 [2024-11-26 19:39:04.952312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:09.963 [2024-11-26 19:39:04.989389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:09.963 [2024-11-26 19:39:04.989460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:09.963 [2024-11-26 19:39:04.989480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.963 [2024-11-26 19:39:05.034971] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:10.528 19:39:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:10.528 19:39:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:10.528 19:39:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=58245 00:05:10.528 19:39:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 58245 /var/tmp/spdk2.sock 00:05:10.528 19:39:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:10.528 19:39:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58245 /var/tmp/spdk2.sock 00:05:10.528 19:39:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:10.528 19:39:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:10.528 19:39:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:10.528 19:39:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:10.529 19:39:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:10.529 19:39:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 58245 /var/tmp/spdk2.sock 00:05:10.529 19:39:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 58245 ']' 00:05:10.529 19:39:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:10.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:10.529 19:39:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:10.529 19:39:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:10.529 19:39:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:10.529 19:39:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:10.529 [2024-11-26 19:39:05.716329] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:05:10.529 [2024-11-26 19:39:05.716707] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58245 ] 00:05:10.786 [2024-11-26 19:39:05.868959] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58227 has claimed it. 00:05:10.786 [2024-11-26 19:39:05.869012] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:11.351 ERROR: process (pid: 58245) is no longer running 00:05:11.351 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58245) - No such process 00:05:11.351 19:39:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:11.351 19:39:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:11.351 19:39:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:11.351 19:39:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:11.351 19:39:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:11.352 19:39:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:11.352 19:39:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:11.352 19:39:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:11.352 19:39:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:11.352 19:39:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:11.352 19:39:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 58227 00:05:11.352 19:39:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 58227 ']' 00:05:11.352 19:39:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 58227 00:05:11.352 19:39:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:11.352 19:39:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:11.352 19:39:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58227 00:05:11.352 killing process with pid 58227 00:05:11.352 19:39:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:11.352 19:39:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:11.352 19:39:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58227' 00:05:11.352 19:39:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 58227 00:05:11.352 19:39:06 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 58227 00:05:11.610 ************************************ 00:05:11.610 END TEST locking_overlapped_coremask 00:05:11.610 ************************************ 00:05:11.610 00:05:11.610 real 0m1.835s 00:05:11.610 user 0m5.253s 00:05:11.610 sys 0m0.268s 00:05:11.610 19:39:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.610 19:39:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:11.610 19:39:06 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:11.610 19:39:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:11.610 19:39:06 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.610 19:39:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:11.610 ************************************ 00:05:11.610 START TEST locking_overlapped_coremask_via_rpc 00:05:11.610 ************************************ 00:05:11.610 19:39:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:11.610 19:39:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=58280 00:05:11.610 19:39:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 58280 /var/tmp/spdk.sock 00:05:11.610 19:39:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:11.610 19:39:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58280 ']' 00:05:11.610 19:39:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.610 19:39:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:11.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.610 19:39:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.610 19:39:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:11.610 19:39:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.610 [2024-11-26 19:39:06.686908] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:05:11.610 [2024-11-26 19:39:06.686963] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58280 ] 00:05:11.610 [2024-11-26 19:39:06.820004] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:11.610 [2024-11-26 19:39:06.820038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:11.868 [2024-11-26 19:39:06.855077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:11.868 [2024-11-26 19:39:06.855409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:11.868 [2024-11-26 19:39:06.855533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.868 [2024-11-26 19:39:06.897605] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:12.434 19:39:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:12.434 19:39:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:12.434 19:39:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:12.434 19:39:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=58298 00:05:12.434 19:39:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 58298 /var/tmp/spdk2.sock 00:05:12.434 19:39:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58298 ']' 00:05:12.434 19:39:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:12.434 19:39:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:12.434 19:39:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:12.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:12.434 19:39:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:12.434 19:39:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.434 [2024-11-26 19:39:07.588178] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:05:12.434 [2024-11-26 19:39:07.588654] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58298 ] 00:05:12.693 [2024-11-26 19:39:07.740164] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:12.693 [2024-11-26 19:39:07.740202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:12.693 [2024-11-26 19:39:07.821129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:12.693 [2024-11-26 19:39:07.824880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:12.693 [2024-11-26 19:39:07.824884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:12.693 [2024-11-26 19:39:07.924922] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:13.259 19:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:13.259 19:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:13.259 19:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:13.259 19:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.259 19:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.259 19:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.259 19:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:13.259 19:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:13.259 19:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:13.259 19:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:13.259 19:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:13.259 19:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:13.259 19:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:13.259 19:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:13.259 19:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.259 19:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.259 [2024-11-26 19:39:08.497862] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58280 has claimed it. 00:05:13.518 request: 00:05:13.518 { 00:05:13.518 "method": "framework_enable_cpumask_locks", 00:05:13.518 "req_id": 1 00:05:13.518 } 00:05:13.518 Got JSON-RPC error response 00:05:13.518 response: 00:05:13.518 { 00:05:13.518 "code": -32603, 00:05:13.518 "message": "Failed to claim CPU core: 2" 00:05:13.518 } 00:05:13.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:13.519 19:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:13.519 19:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:13.519 19:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:13.519 19:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:13.519 19:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:13.519 19:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 58280 /var/tmp/spdk.sock 00:05:13.519 19:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58280 ']' 00:05:13.519 19:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.519 19:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.519 19:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.519 19:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.519 19:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:13.519 19:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:13.519 19:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:13.519 19:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 58298 /var/tmp/spdk2.sock 00:05:13.519 19:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58298 ']' 00:05:13.519 19:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:13.519 19:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.519 19:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:13.519 19:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.519 19:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.777 ************************************ 00:05:13.777 END TEST locking_overlapped_coremask_via_rpc 00:05:13.777 ************************************ 00:05:13.777 19:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:13.777 19:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:13.777 19:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:13.777 19:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:13.777 19:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:13.777 19:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:13.777 00:05:13.777 real 0m2.294s 00:05:13.777 user 0m1.077s 00:05:13.777 sys 0m0.134s 00:05:13.777 19:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.777 19:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.777 19:39:08 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:13.777 19:39:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 58280 ]] 00:05:13.777 19:39:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 58280 00:05:13.777 19:39:08 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 58280 ']' 00:05:13.777 19:39:08 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 58280 00:05:13.777 19:39:08 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:13.777 19:39:08 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:13.777 19:39:08 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58280 00:05:13.777 killing process with pid 58280 00:05:13.777 19:39:08 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:13.777 19:39:08 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:13.777 19:39:08 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58280' 00:05:13.777 19:39:08 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 58280 00:05:13.777 19:39:08 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 58280 00:05:14.035 19:39:09 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 58298 ]] 00:05:14.035 19:39:09 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 58298 00:05:14.035 19:39:09 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 58298 ']' 00:05:14.035 19:39:09 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 58298 00:05:14.035 19:39:09 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:14.035 19:39:09 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:14.035 
19:39:09 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58298 00:05:14.035 killing process with pid 58298 00:05:14.035 19:39:09 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:14.035 19:39:09 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:14.035 19:39:09 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58298' 00:05:14.035 19:39:09 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 58298 00:05:14.035 19:39:09 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 58298 00:05:14.294 19:39:09 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:14.294 19:39:09 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:14.294 19:39:09 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 58280 ]] 00:05:14.294 19:39:09 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 58280 00:05:14.294 19:39:09 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 58280 ']' 00:05:14.294 19:39:09 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 58280 00:05:14.294 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (58280) - No such process 00:05:14.294 Process with pid 58280 is not found 00:05:14.294 19:39:09 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 58280 is not found' 00:05:14.294 19:39:09 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 58298 ]] 00:05:14.294 19:39:09 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 58298 00:05:14.294 Process with pid 58298 is not found 00:05:14.294 19:39:09 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 58298 ']' 00:05:14.294 19:39:09 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 58298 00:05:14.294 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (58298) - No such process 00:05:14.294 19:39:09 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 58298 is not found' 00:05:14.294 19:39:09 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:14.294 ************************************ 00:05:14.294 END TEST cpu_locks 00:05:14.294 ************************************ 00:05:14.294 00:05:14.294 real 0m15.188s 00:05:14.294 user 0m28.075s 00:05:14.294 sys 0m3.319s 00:05:14.294 19:39:09 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.294 19:39:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:14.294 00:05:14.294 real 0m39.112s 00:05:14.294 user 1m17.776s 00:05:14.294 sys 0m5.929s 00:05:14.294 19:39:09 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.294 19:39:09 event -- common/autotest_common.sh@10 -- # set +x 00:05:14.294 ************************************ 00:05:14.294 END TEST event 00:05:14.294 ************************************ 00:05:14.294 19:39:09 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:14.294 19:39:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.294 19:39:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.294 19:39:09 -- common/autotest_common.sh@10 -- # set +x 00:05:14.294 ************************************ 00:05:14.294 START TEST thread 00:05:14.294 ************************************ 00:05:14.294 19:39:09 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:14.552 * Looking for test storage... 
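Note: the cleanup traced above relies on kill -0, which delivers no signal and only reports whether the PID still exists; because both targets have already exited, the builtin fails with "No such process" and the script falls through to the "is not found" message instead of killing anything. A condensed sketch of that killprocess pattern (placeholder PID; the real values come from the spdk_tgt_pid variables in the trace):

    pid=58280                                  # placeholder PID for illustration
    if kill -0 "$pid" 2>/dev/null; then
        kill "$pid" && wait "$pid"             # still running: terminate and reap
    else
        echo "Process with pid $pid is not found"
    fi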
00:05:14.552 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:14.552 19:39:09 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:14.552 19:39:09 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:05:14.552 19:39:09 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:14.552 19:39:09 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:14.552 19:39:09 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:14.552 19:39:09 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:14.552 19:39:09 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:14.552 19:39:09 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:14.552 19:39:09 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:14.552 19:39:09 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:14.552 19:39:09 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:14.552 19:39:09 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:14.552 19:39:09 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:14.552 19:39:09 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:14.552 19:39:09 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:14.552 19:39:09 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:14.552 19:39:09 thread -- scripts/common.sh@345 -- # : 1 00:05:14.552 19:39:09 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:14.552 19:39:09 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:14.552 19:39:09 thread -- scripts/common.sh@365 -- # decimal 1 00:05:14.552 19:39:09 thread -- scripts/common.sh@353 -- # local d=1 00:05:14.552 19:39:09 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:14.552 19:39:09 thread -- scripts/common.sh@355 -- # echo 1 00:05:14.552 19:39:09 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:14.552 19:39:09 thread -- scripts/common.sh@366 -- # decimal 2 00:05:14.552 19:39:09 thread -- scripts/common.sh@353 -- # local d=2 00:05:14.552 19:39:09 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:14.552 19:39:09 thread -- scripts/common.sh@355 -- # echo 2 00:05:14.552 19:39:09 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:14.552 19:39:09 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:14.552 19:39:09 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:14.552 19:39:09 thread -- scripts/common.sh@368 -- # return 0 00:05:14.552 19:39:09 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:14.552 19:39:09 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:14.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.552 --rc genhtml_branch_coverage=1 00:05:14.552 --rc genhtml_function_coverage=1 00:05:14.552 --rc genhtml_legend=1 00:05:14.552 --rc geninfo_all_blocks=1 00:05:14.552 --rc geninfo_unexecuted_blocks=1 00:05:14.552 00:05:14.552 ' 00:05:14.552 19:39:09 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:14.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.552 --rc genhtml_branch_coverage=1 00:05:14.552 --rc genhtml_function_coverage=1 00:05:14.552 --rc genhtml_legend=1 00:05:14.552 --rc geninfo_all_blocks=1 00:05:14.552 --rc geninfo_unexecuted_blocks=1 00:05:14.552 00:05:14.552 ' 00:05:14.552 19:39:09 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:14.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:14.552 --rc genhtml_branch_coverage=1 00:05:14.552 --rc genhtml_function_coverage=1 00:05:14.552 --rc genhtml_legend=1 00:05:14.552 --rc geninfo_all_blocks=1 00:05:14.552 --rc geninfo_unexecuted_blocks=1 00:05:14.552 00:05:14.553 ' 00:05:14.553 19:39:09 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:14.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.553 --rc genhtml_branch_coverage=1 00:05:14.553 --rc genhtml_function_coverage=1 00:05:14.553 --rc genhtml_legend=1 00:05:14.553 --rc geninfo_all_blocks=1 00:05:14.553 --rc geninfo_unexecuted_blocks=1 00:05:14.553 00:05:14.553 ' 00:05:14.553 19:39:09 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:14.553 19:39:09 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:14.553 19:39:09 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.553 19:39:09 thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.553 ************************************ 00:05:14.553 START TEST thread_poller_perf 00:05:14.553 ************************************ 00:05:14.553 19:39:09 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:14.553 [2024-11-26 19:39:09.679193] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:05:14.553 [2024-11-26 19:39:09.679337] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58423 ] 00:05:14.811 [2024-11-26 19:39:09.811973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.811 [2024-11-26 19:39:09.843570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.811 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:15.748 [2024-11-26T19:39:10.995Z] ====================================== 00:05:15.748 [2024-11-26T19:39:10.995Z] busy:2610840050 (cyc) 00:05:15.748 [2024-11-26T19:39:10.995Z] total_run_count: 397000 00:05:15.748 [2024-11-26T19:39:10.995Z] tsc_hz: 2600000000 (cyc) 00:05:15.748 [2024-11-26T19:39:10.995Z] ====================================== 00:05:15.748 [2024-11-26T19:39:10.995Z] poller_cost: 6576 (cyc), 2529 (nsec) 00:05:15.748 00:05:15.748 real 0m1.215s 00:05:15.748 user 0m1.083s 00:05:15.748 sys 0m0.027s 00:05:15.748 19:39:10 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:15.748 19:39:10 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:15.748 ************************************ 00:05:15.748 END TEST thread_poller_perf 00:05:15.748 ************************************ 00:05:15.748 19:39:10 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:15.748 19:39:10 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:15.748 19:39:10 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.748 19:39:10 thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.748 ************************************ 00:05:15.748 START TEST thread_poller_perf 00:05:15.748 ************************************ 00:05:15.748 19:39:10 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:15.748 [2024-11-26 19:39:10.931558] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:05:15.748 [2024-11-26 19:39:10.931727] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58458 ] 00:05:16.006 [2024-11-26 19:39:11.067395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.006 [2024-11-26 19:39:11.099021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.006 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:05:16.941 [2024-11-26T19:39:12.188Z] ====================================== 00:05:16.941 [2024-11-26T19:39:12.188Z] busy:2601927964 (cyc) 00:05:16.941 [2024-11-26T19:39:12.188Z] total_run_count: 5402000 00:05:16.941 [2024-11-26T19:39:12.188Z] tsc_hz: 2600000000 (cyc) 00:05:16.941 [2024-11-26T19:39:12.188Z] ====================================== 00:05:16.941 [2024-11-26T19:39:12.188Z] poller_cost: 481 (cyc), 185 (nsec) 00:05:16.941 ************************************ 00:05:16.941 END TEST thread_poller_perf 00:05:16.941 ************************************ 00:05:16.941 00:05:16.941 real 0m1.210s 00:05:16.941 user 0m1.080s 00:05:16.941 sys 0m0.026s 00:05:16.941 19:39:12 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:16.941 19:39:12 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:16.941 19:39:12 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:16.941 ************************************ 00:05:16.941 END TEST thread 00:05:16.941 ************************************ 00:05:16.941 00:05:16.941 real 0m2.660s 00:05:16.941 user 0m2.282s 00:05:16.941 sys 0m0.169s 00:05:16.941 19:39:12 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:16.941 19:39:12 thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.198 19:39:12 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:17.198 19:39:12 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:17.198 19:39:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:17.198 19:39:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.198 19:39:12 -- common/autotest_common.sh@10 -- # set +x 00:05:17.198 ************************************ 00:05:17.198 START TEST app_cmdline 00:05:17.198 ************************************ 00:05:17.198 19:39:12 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:17.198 * Looking for test storage... 
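Note: the poller_cost figures in the two result tables above appear to be derived rather than measured directly — the cycle cost is the busy cycle count divided by total_run_count, and the nanosecond figure rescales that by the reported 2600000000 Hz TSC. Reproducing the arithmetic from the numbers in the tables:

    # run 1 (1 us period): 2610840050 cyc / 397000 polls  = 6576 cyc  ~= 2529 ns at 2.6 GHz
    # run 2 (0 us period): 2601927964 cyc / 5402000 polls = 481 cyc   ~= 185 ns at 2.6 GHz
    echo $(( 2610840050 / 397000 )) $(( 2601927964 / 5402000 ))   # prints: 6576 481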
00:05:17.198 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:17.198 19:39:12 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:17.198 19:39:12 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:05:17.198 19:39:12 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:17.198 19:39:12 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:17.198 19:39:12 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:17.198 19:39:12 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:17.198 19:39:12 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:17.198 19:39:12 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:17.198 19:39:12 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:17.198 19:39:12 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:17.198 19:39:12 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:17.198 19:39:12 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:17.198 19:39:12 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:17.198 19:39:12 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:17.198 19:39:12 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:17.198 19:39:12 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:17.198 19:39:12 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:17.198 19:39:12 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:17.198 19:39:12 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:17.198 19:39:12 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:17.199 19:39:12 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:17.199 19:39:12 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:17.199 19:39:12 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:17.199 19:39:12 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:17.199 19:39:12 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:17.199 19:39:12 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:17.199 19:39:12 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:17.199 19:39:12 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:17.199 19:39:12 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:17.199 19:39:12 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:17.199 19:39:12 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:17.199 19:39:12 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:17.199 19:39:12 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:17.199 19:39:12 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:17.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.199 --rc genhtml_branch_coverage=1 00:05:17.199 --rc genhtml_function_coverage=1 00:05:17.199 --rc genhtml_legend=1 00:05:17.199 --rc geninfo_all_blocks=1 00:05:17.199 --rc geninfo_unexecuted_blocks=1 00:05:17.199 00:05:17.199 ' 00:05:17.199 19:39:12 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:17.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.199 --rc genhtml_branch_coverage=1 00:05:17.199 --rc genhtml_function_coverage=1 00:05:17.199 --rc genhtml_legend=1 00:05:17.199 --rc geninfo_all_blocks=1 00:05:17.199 --rc geninfo_unexecuted_blocks=1 00:05:17.199 
00:05:17.199 ' 00:05:17.199 19:39:12 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:17.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.199 --rc genhtml_branch_coverage=1 00:05:17.199 --rc genhtml_function_coverage=1 00:05:17.199 --rc genhtml_legend=1 00:05:17.199 --rc geninfo_all_blocks=1 00:05:17.199 --rc geninfo_unexecuted_blocks=1 00:05:17.199 00:05:17.199 ' 00:05:17.199 19:39:12 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:17.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.199 --rc genhtml_branch_coverage=1 00:05:17.199 --rc genhtml_function_coverage=1 00:05:17.199 --rc genhtml_legend=1 00:05:17.199 --rc geninfo_all_blocks=1 00:05:17.199 --rc geninfo_unexecuted_blocks=1 00:05:17.199 00:05:17.199 ' 00:05:17.199 19:39:12 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:17.199 19:39:12 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=58535 00:05:17.199 19:39:12 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 58535 00:05:17.199 19:39:12 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 58535 ']' 00:05:17.199 19:39:12 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.199 19:39:12 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:17.199 19:39:12 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.199 19:39:12 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:17.199 19:39:12 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:17.199 19:39:12 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:17.199 [2024-11-26 19:39:12.368694] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:05:17.199 [2024-11-26 19:39:12.368951] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58535 ] 00:05:17.456 [2024-11-26 19:39:12.505140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.456 [2024-11-26 19:39:12.537134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.456 [2024-11-26 19:39:12.579121] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:18.059 19:39:13 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:18.059 19:39:13 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:18.059 19:39:13 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:05:18.316 { 00:05:18.316 "version": "SPDK v25.01-pre git sha1 fc308e3c5", 00:05:18.316 "fields": { 00:05:18.316 "major": 25, 00:05:18.316 "minor": 1, 00:05:18.316 "patch": 0, 00:05:18.316 "suffix": "-pre", 00:05:18.316 "commit": "fc308e3c5" 00:05:18.316 } 00:05:18.316 } 00:05:18.316 19:39:13 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:18.316 19:39:13 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:18.316 19:39:13 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:18.316 19:39:13 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:18.316 19:39:13 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:18.316 19:39:13 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.316 19:39:13 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:18.316 19:39:13 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:18.316 19:39:13 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:18.316 19:39:13 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.316 19:39:13 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:18.316 19:39:13 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:18.316 19:39:13 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:18.316 19:39:13 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:18.316 19:39:13 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:18.316 19:39:13 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:18.316 19:39:13 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:18.316 19:39:13 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:18.316 19:39:13 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:18.316 19:39:13 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:18.316 19:39:13 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:18.316 19:39:13 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:18.316 19:39:13 app_cmdline -- common/autotest_common.sh@646 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:05:18.316 19:39:13 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:18.574 request: 00:05:18.574 { 00:05:18.574 "method": "env_dpdk_get_mem_stats", 00:05:18.574 "req_id": 1 00:05:18.574 } 00:05:18.574 Got JSON-RPC error response 00:05:18.574 response: 00:05:18.574 { 00:05:18.574 "code": -32601, 00:05:18.574 "message": "Method not found" 00:05:18.574 } 00:05:18.574 19:39:13 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:18.574 19:39:13 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:18.574 19:39:13 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:18.574 19:39:13 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:18.574 19:39:13 app_cmdline -- app/cmdline.sh@1 -- # killprocess 58535 00:05:18.574 19:39:13 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 58535 ']' 00:05:18.574 19:39:13 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 58535 00:05:18.574 19:39:13 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:18.574 19:39:13 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:18.574 19:39:13 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58535 00:05:18.574 killing process with pid 58535 00:05:18.574 19:39:13 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:18.574 19:39:13 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:18.574 19:39:13 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58535' 00:05:18.574 19:39:13 app_cmdline -- common/autotest_common.sh@973 -- # kill 58535 00:05:18.574 19:39:13 app_cmdline -- common/autotest_common.sh@978 -- # wait 58535 00:05:18.832 ************************************ 00:05:18.832 END TEST app_cmdline 00:05:18.832 ************************************ 00:05:18.832 00:05:18.832 real 0m1.650s 00:05:18.832 user 0m2.056s 00:05:18.832 sys 0m0.288s 00:05:18.832 19:39:13 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:18.832 19:39:13 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:18.832 19:39:13 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:18.832 19:39:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:18.832 19:39:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.832 19:39:13 -- common/autotest_common.sh@10 -- # set +x 00:05:18.832 ************************************ 00:05:18.832 START TEST version 00:05:18.832 ************************************ 00:05:18.832 19:39:13 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:18.832 * Looking for test storage... 
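Note: the -32601 "Method not found" response above is the point of the cmdline test — this target was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so env_dpdk_get_mem_stats is refused even though it is an ordinary RPC on an unrestricted target. The equivalent manual calls against the same default socket would be:

    # Allowed by the allowlist, returns the version JSON seen above:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version
    # Outside the allowlist, fails with "Method not found" (-32601):
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats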
00:05:18.832 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:18.832 19:39:13 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:18.832 19:39:13 version -- common/autotest_common.sh@1693 -- # lcov --version 00:05:18.832 19:39:13 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:18.832 19:39:14 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:18.832 19:39:14 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:18.832 19:39:14 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:18.832 19:39:14 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:18.832 19:39:14 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.832 19:39:14 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:18.832 19:39:14 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:18.832 19:39:14 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:18.832 19:39:14 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:18.832 19:39:14 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:18.832 19:39:14 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:18.832 19:39:14 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:18.832 19:39:14 version -- scripts/common.sh@344 -- # case "$op" in 00:05:18.832 19:39:14 version -- scripts/common.sh@345 -- # : 1 00:05:18.832 19:39:14 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:18.832 19:39:14 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:18.832 19:39:14 version -- scripts/common.sh@365 -- # decimal 1 00:05:18.832 19:39:14 version -- scripts/common.sh@353 -- # local d=1 00:05:18.832 19:39:14 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.832 19:39:14 version -- scripts/common.sh@355 -- # echo 1 00:05:18.832 19:39:14 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:18.832 19:39:14 version -- scripts/common.sh@366 -- # decimal 2 00:05:18.832 19:39:14 version -- scripts/common.sh@353 -- # local d=2 00:05:18.832 19:39:14 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.832 19:39:14 version -- scripts/common.sh@355 -- # echo 2 00:05:18.832 19:39:14 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:18.832 19:39:14 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:18.832 19:39:14 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:18.832 19:39:14 version -- scripts/common.sh@368 -- # return 0 00:05:18.832 19:39:14 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.832 19:39:14 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:18.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.832 --rc genhtml_branch_coverage=1 00:05:18.832 --rc genhtml_function_coverage=1 00:05:18.832 --rc genhtml_legend=1 00:05:18.832 --rc geninfo_all_blocks=1 00:05:18.832 --rc geninfo_unexecuted_blocks=1 00:05:18.832 00:05:18.832 ' 00:05:18.832 19:39:14 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:18.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.832 --rc genhtml_branch_coverage=1 00:05:18.832 --rc genhtml_function_coverage=1 00:05:18.832 --rc genhtml_legend=1 00:05:18.832 --rc geninfo_all_blocks=1 00:05:18.832 --rc geninfo_unexecuted_blocks=1 00:05:18.832 00:05:18.833 ' 00:05:18.833 19:39:14 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:18.833 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:18.833 --rc genhtml_branch_coverage=1 00:05:18.833 --rc genhtml_function_coverage=1 00:05:18.833 --rc genhtml_legend=1 00:05:18.833 --rc geninfo_all_blocks=1 00:05:18.833 --rc geninfo_unexecuted_blocks=1 00:05:18.833 00:05:18.833 ' 00:05:18.833 19:39:14 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:18.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.833 --rc genhtml_branch_coverage=1 00:05:18.833 --rc genhtml_function_coverage=1 00:05:18.833 --rc genhtml_legend=1 00:05:18.833 --rc geninfo_all_blocks=1 00:05:18.833 --rc geninfo_unexecuted_blocks=1 00:05:18.833 00:05:18.833 ' 00:05:18.833 19:39:14 version -- app/version.sh@17 -- # get_header_version major 00:05:18.833 19:39:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:18.833 19:39:14 version -- app/version.sh@14 -- # tr -d '"' 00:05:18.833 19:39:14 version -- app/version.sh@14 -- # cut -f2 00:05:18.833 19:39:14 version -- app/version.sh@17 -- # major=25 00:05:18.833 19:39:14 version -- app/version.sh@18 -- # get_header_version minor 00:05:18.833 19:39:14 version -- app/version.sh@14 -- # tr -d '"' 00:05:18.833 19:39:14 version -- app/version.sh@14 -- # cut -f2 00:05:18.833 19:39:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:18.833 19:39:14 version -- app/version.sh@18 -- # minor=1 00:05:18.833 19:39:14 version -- app/version.sh@19 -- # get_header_version patch 00:05:18.833 19:39:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:18.833 19:39:14 version -- app/version.sh@14 -- # cut -f2 00:05:18.833 19:39:14 version -- app/version.sh@14 -- # tr -d '"' 00:05:18.833 19:39:14 version -- app/version.sh@19 -- # patch=0 00:05:18.833 19:39:14 version -- app/version.sh@20 -- # get_header_version suffix 00:05:18.833 19:39:14 version -- app/version.sh@14 -- # cut -f2 00:05:18.833 19:39:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:18.833 19:39:14 version -- app/version.sh@14 -- # tr -d '"' 00:05:18.833 19:39:14 version -- app/version.sh@20 -- # suffix=-pre 00:05:18.833 19:39:14 version -- app/version.sh@22 -- # version=25.1 00:05:18.833 19:39:14 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:18.833 19:39:14 version -- app/version.sh@28 -- # version=25.1rc0 00:05:18.833 19:39:14 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:18.833 19:39:14 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:18.833 19:39:14 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:18.833 19:39:14 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:18.833 ************************************ 00:05:18.833 END TEST version 00:05:18.833 ************************************ 00:05:18.833 00:05:18.833 real 0m0.182s 00:05:18.833 user 0m0.106s 00:05:18.833 sys 0m0.098s 00:05:18.833 19:39:14 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:18.833 19:39:14 version -- common/autotest_common.sh@10 -- # set +x 00:05:19.091 19:39:14 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:19.091 19:39:14 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:19.091 19:39:14 -- spdk/autotest.sh@194 -- # uname -s 00:05:19.091 19:39:14 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:19.091 19:39:14 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:19.091 19:39:14 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:05:19.091 19:39:14 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:05:19.091 19:39:14 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:05:19.091 19:39:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:19.091 19:39:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.091 19:39:14 -- common/autotest_common.sh@10 -- # set +x 00:05:19.091 ************************************ 00:05:19.091 START TEST spdk_dd 00:05:19.091 ************************************ 00:05:19.091 19:39:14 spdk_dd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:05:19.091 * Looking for test storage... 00:05:19.091 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:19.091 19:39:14 spdk_dd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:19.091 19:39:14 spdk_dd -- common/autotest_common.sh@1693 -- # lcov --version 00:05:19.091 19:39:14 spdk_dd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:19.091 19:39:14 spdk_dd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:19.091 19:39:14 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:19.091 19:39:14 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:19.091 19:39:14 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:19.091 19:39:14 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.091 19:39:14 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:05:19.091 19:39:14 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:05:19.091 19:39:14 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:05:19.091 19:39:14 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:05:19.091 19:39:14 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:05:19.091 19:39:14 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:05:19.091 19:39:14 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:19.091 19:39:14 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:05:19.091 19:39:14 spdk_dd -- scripts/common.sh@345 -- # : 1 00:05:19.091 19:39:14 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:19.091 19:39:14 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:19.091 19:39:14 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:05:19.091 19:39:14 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:05:19.091 19:39:14 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.091 19:39:14 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:05:19.091 19:39:14 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:05:19.091 19:39:14 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:05:19.091 19:39:14 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:05:19.091 19:39:14 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.091 19:39:14 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:05:19.091 19:39:14 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:05:19.091 19:39:14 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:19.091 19:39:14 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:19.091 19:39:14 spdk_dd -- scripts/common.sh@368 -- # return 0 00:05:19.091 19:39:14 spdk_dd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.091 19:39:14 spdk_dd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:19.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.091 --rc genhtml_branch_coverage=1 00:05:19.091 --rc genhtml_function_coverage=1 00:05:19.091 --rc genhtml_legend=1 00:05:19.091 --rc geninfo_all_blocks=1 00:05:19.091 --rc geninfo_unexecuted_blocks=1 00:05:19.091 00:05:19.091 ' 00:05:19.091 19:39:14 spdk_dd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:19.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.091 --rc genhtml_branch_coverage=1 00:05:19.091 --rc genhtml_function_coverage=1 00:05:19.091 --rc genhtml_legend=1 00:05:19.091 --rc geninfo_all_blocks=1 00:05:19.091 --rc geninfo_unexecuted_blocks=1 00:05:19.091 00:05:19.091 ' 00:05:19.091 19:39:14 spdk_dd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:19.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.091 --rc genhtml_branch_coverage=1 00:05:19.091 --rc genhtml_function_coverage=1 00:05:19.091 --rc genhtml_legend=1 00:05:19.091 --rc geninfo_all_blocks=1 00:05:19.091 --rc geninfo_unexecuted_blocks=1 00:05:19.091 00:05:19.091 ' 00:05:19.091 19:39:14 spdk_dd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:19.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.091 --rc genhtml_branch_coverage=1 00:05:19.091 --rc genhtml_function_coverage=1 00:05:19.091 --rc genhtml_legend=1 00:05:19.091 --rc geninfo_all_blocks=1 00:05:19.091 --rc geninfo_unexecuted_blocks=1 00:05:19.091 00:05:19.091 ' 00:05:19.091 19:39:14 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:19.091 19:39:14 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:05:19.091 19:39:14 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:19.091 19:39:14 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:19.091 19:39:14 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:19.091 19:39:14 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.091 19:39:14 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.091 19:39:14 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.091 19:39:14 spdk_dd -- paths/export.sh@5 -- # export PATH 00:05:19.091 19:39:14 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.091 19:39:14 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:19.350 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:19.350 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:19.350 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:19.350 19:39:14 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:05:19.350 19:39:14 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:05:19.350 19:39:14 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:05:19.350 19:39:14 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:05:19.350 19:39:14 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:05:19.350 19:39:14 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:05:19.350 19:39:14 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:05:19.350 19:39:14 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:05:19.350 19:39:14 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:05:19.350 19:39:14 spdk_dd -- scripts/common.sh@233 -- # local class 00:05:19.350 19:39:14 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:05:19.350 19:39:14 spdk_dd -- scripts/common.sh@235 -- # local progif 00:05:19.350 19:39:14 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:05:19.350 19:39:14 spdk_dd -- scripts/common.sh@236 -- # class=01 00:05:19.350 19:39:14 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:05:19.350 19:39:14 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:05:19.350 19:39:14 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:05:19.350 19:39:14 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:05:19.350 19:39:14 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:05:19.350 19:39:14 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:05:19.350 19:39:14 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:05:19.350 19:39:14 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:05:19.350 19:39:14 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:05:19.350 19:39:14 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:05:19.350 19:39:14 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:05:19.350 19:39:14 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:05:19.350 19:39:14 spdk_dd -- scripts/common.sh@18 -- # local i 00:05:19.350 19:39:14 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:05:19.350 19:39:14 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:05:19.350 19:39:14 spdk_dd -- scripts/common.sh@27 -- # return 0 00:05:19.350 19:39:14 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:05:19.350 19:39:14 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:05:19.350 19:39:14 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:05:19.350 19:39:14 spdk_dd -- scripts/common.sh@18 -- # local i 00:05:19.350 19:39:14 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:05:19.350 19:39:14 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:05:19.350 19:39:14 spdk_dd -- scripts/common.sh@27 -- # return 0 00:05:19.350 19:39:14 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:05:19.350 19:39:14 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:05:19.350 19:39:14 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:05:19.350 19:39:14 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:05:19.350 19:39:14 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:05:19.350 19:39:14 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:05:19.350 19:39:14 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:05:19.350 19:39:14 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:05:19.350 19:39:14 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:05:19.350 19:39:14 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:05:19.350 19:39:14 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:05:19.350 19:39:14 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:05:19.350 19:39:14 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:19.350 19:39:14 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@139 -- # local lib 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.1 == liburing.so.* ]] 
00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.11.0 == liburing.so.* ]] 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.12.0 == liburing.so.* ]] 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.11.0 == liburing.so.* ]] 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.12.0 == liburing.so.* ]] 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.15.0 == liburing.so.* ]] 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.7.0 == liburing.so.* ]] 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 
00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.1 == liburing.so.* ]] 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.350 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:05:19.351 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.351 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:05:19.351 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.351 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:05:19.351 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.351 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:05:19.351 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.351 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:05:19.351 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.351 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:05:19.351 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.351 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.2.0 == liburing.so.* ]] 00:05:19.351 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.351 19:39:14 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:05:19.351 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.351 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:05:19.351 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.351 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.17.0 == liburing.so.* ]] 00:05:19.351 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.351 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:05:19.351 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.351 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:05:19.351 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.351 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:05:19.351 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.351 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:05:19.351 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.351 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:05:19.351 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.351 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:05:19.351 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.351 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:05:19.351 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.351 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:05:19.351 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.351 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:05:19.351 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.351 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:05:19.351 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.351 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:05:19.351 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.351 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.11.0 == liburing.so.* ]] 00:05:19.351 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.351 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:05:19.351 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.351 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:05:19.351 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.351 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:05:19.351 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.351 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:05:19.351 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.351 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:05:19.351 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.610 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.1 == liburing.so.* ]] 00:05:19.610 19:39:14 spdk_dd -- dd/common.sh@142 -- 
# read -r _ lib _ 00:05:19.610 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.1 == liburing.so.* ]] 00:05:19.610 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.610 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:05:19.610 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.610 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:05:19.610 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.610 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:05:19.610 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.610 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:05:19.610 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.610 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:05:19.610 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.610 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:05:19.610 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.610 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:05:19.610 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.610 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:05:19.610 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.610 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:05:19.610 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.610 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:05:19.610 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.610 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:05:19.610 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.610 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:05:19.610 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.610 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:05:19.610 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.610 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:05:19.610 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.610 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:05:19.610 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.610 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:05:19.610 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.610 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:05:19.610 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.610 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:05:19.610 19:39:14 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:19.610 19:39:14 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:05:19.610 19:39:14 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:05:19.610 * spdk_dd linked to liburing 00:05:19.610 19:39:14 spdk_dd -- dd/common.sh@146 -- # [[ -e 
/home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:05:19.610 19:39:14 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@23 -- # CONFIG_CET=n 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@40 -- # 
CONFIG_CRYPTO=n 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=y 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@76 -- # CONFIG_FC=n 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:05:19.610 19:39:14 spdk_dd 
-- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:05:19.610 19:39:14 spdk_dd -- common/build_config.sh@90 -- # CONFIG_URING=y 00:05:19.610 19:39:14 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:05:19.610 19:39:14 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:05:19.610 19:39:14 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:05:19.610 19:39:14 spdk_dd -- dd/common.sh@153 -- # return 0 00:05:19.610 19:39:14 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:05:19.610 19:39:14 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:05:19.610 19:39:14 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:19.610 19:39:14 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.610 19:39:14 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:05:19.610 ************************************ 00:05:19.610 START TEST spdk_dd_basic_rw 00:05:19.610 ************************************ 00:05:19.610 19:39:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:05:19.610 * Looking for test storage... 00:05:19.610 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:19.610 19:39:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:19.610 19:39:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # lcov --version 00:05:19.610 19:39:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:19.610 19:39:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:19.610 19:39:14 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:19.610 19:39:14 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:19.610 19:39:14 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:19.610 19:39:14 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.610 19:39:14 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:05:19.610 19:39:14 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:05:19.610 19:39:14 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:05:19.610 19:39:14 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:05:19.610 19:39:14 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:05:19.610 19:39:14 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:05:19.610 19:39:14 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:19.610 19:39:14 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:05:19.610 19:39:14 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:05:19.611 19:39:14 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:19.611 19:39:14 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:19.611 19:39:14 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:05:19.611 19:39:14 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:05:19.611 19:39:14 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.611 19:39:14 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:05:19.611 19:39:14 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:05:19.611 19:39:14 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:05:19.611 19:39:14 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:05:19.611 19:39:14 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.611 19:39:14 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:05:19.611 19:39:14 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:05:19.611 19:39:14 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:19.611 19:39:14 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:19.611 19:39:14 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:05:19.611 19:39:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.611 19:39:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:19.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.611 --rc genhtml_branch_coverage=1 00:05:19.611 --rc genhtml_function_coverage=1 00:05:19.611 --rc genhtml_legend=1 00:05:19.611 --rc geninfo_all_blocks=1 00:05:19.611 --rc geninfo_unexecuted_blocks=1 00:05:19.611 00:05:19.611 ' 00:05:19.611 19:39:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:19.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.611 --rc genhtml_branch_coverage=1 00:05:19.611 --rc genhtml_function_coverage=1 00:05:19.611 --rc genhtml_legend=1 00:05:19.611 --rc geninfo_all_blocks=1 00:05:19.611 --rc geninfo_unexecuted_blocks=1 00:05:19.611 00:05:19.611 ' 00:05:19.611 19:39:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:19.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.611 --rc genhtml_branch_coverage=1 00:05:19.611 --rc genhtml_function_coverage=1 00:05:19.611 --rc genhtml_legend=1 00:05:19.611 --rc geninfo_all_blocks=1 00:05:19.611 --rc geninfo_unexecuted_blocks=1 00:05:19.611 00:05:19.611 ' 00:05:19.611 19:39:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:19.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.611 --rc genhtml_branch_coverage=1 00:05:19.611 --rc genhtml_function_coverage=1 00:05:19.611 --rc genhtml_legend=1 00:05:19.611 --rc geninfo_all_blocks=1 00:05:19.611 --rc geninfo_unexecuted_blocks=1 00:05:19.611 00:05:19.611 ' 00:05:19.611 19:39:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:19.611 19:39:14 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:05:19.611 19:39:14 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:19.611 19:39:14 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:19.611 19:39:14 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:19.611 19:39:14 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.611 19:39:14 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.611 19:39:14 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.611 19:39:14 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:05:19.611 19:39:14 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.611 19:39:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:05:19.611 19:39:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:05:19.611 19:39:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:05:19.611 19:39:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:05:19.611 19:39:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:05:19.611 19:39:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:05:19.611 19:39:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:05:19.611 19:39:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:19.611 19:39:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 
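The trace below runs spdk_nvme_identify against 0000:00:10.0 and regex-matches the "Current LBA Format" and per-format "Data Size" fields of the identify output to derive the drive's native block size (4096 in this run). A minimal standalone sketch of that parsing, using the same regexes that appear in the trace; this is an illustrative reimplementation, not the actual dd/common.sh code, and it assumes spdk_nvme_identify is on PATH (the run above invokes it by full path):

    # Sketch only: mirrors the regexes visible in the trace, not the dd/common.sh source.
    get_native_bs_sketch() {
        local pci=$1 id re lbaf
        id=$(spdk_nvme_identify -r "trtype:pcie traddr:${pci}") || return 1
        re='Current LBA Format: *LBA Format #([0-9]+)'
        [[ $id =~ $re ]] || return 1            # which LBA format is selected, e.g. 04
        lbaf=${BASH_REMATCH[1]}
        re="LBA Format #${lbaf}: Data Size: *([0-9]+)"
        [[ $id =~ $re ]] || return 1            # that format's data size, e.g. 4096
        echo "${BASH_REMATCH[1]}"
    }
    # Example: get_native_bs_sketch 0000:00:10.0   ->  4096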
00:05:19.611 19:39:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:05:19.611 19:39:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:05:19.611 19:39:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:05:19.611 19:39:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:05:19.872 19:39:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update 
Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 
Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:05:19.872 19:39:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:05:19.873 19:39:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration 
Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported 
SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format 
#02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:05:19.873 19:39:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:05:19.873 19:39:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:05:19.873 19:39:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:05:19.873 19:39:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:19.873 19:39:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:05:19.873 19:39:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:19.873 19:39:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:05:19.873 19:39:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.873 19:39:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:19.873 19:39:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:19.873 19:39:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:19.873 ************************************ 00:05:19.873 START TEST dd_bs_lt_native_bs 00:05:19.873 ************************************ 00:05:19.873 19:39:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1129 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:19.873 19:39:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # local es=0 00:05:19.873 19:39:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:19.873 19:39:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:19.873 19:39:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:19.873 19:39:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:19.873 19:39:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:19.873 19:39:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:19.873 19:39:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:19.873 19:39:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:19.873 19:39:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:19.873 19:39:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:19.873 { 00:05:19.873 "subsystems": [ 00:05:19.873 { 00:05:19.873 "subsystem": "bdev", 00:05:19.873 "config": [ 00:05:19.873 { 00:05:19.873 "params": { 00:05:19.873 "trtype": "pcie", 00:05:19.873 "traddr": "0000:00:10.0", 00:05:19.873 "name": "Nvme0" 00:05:19.873 }, 00:05:19.873 "method": "bdev_nvme_attach_controller" 00:05:19.873 }, 00:05:19.873 { 00:05:19.873 "method": "bdev_wait_for_examine" 00:05:19.873 } 00:05:19.873 ] 00:05:19.873 } 00:05:19.873 ] 00:05:19.873 } 00:05:19.873 [2024-11-26 19:39:14.979023] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:05:19.873 [2024-11-26 19:39:14.979082] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58880 ] 00:05:20.131 [2024-11-26 19:39:15.117831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.131 [2024-11-26 19:39:15.160018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.131 [2024-11-26 19:39:15.191649] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:20.131 [2024-11-26 19:39:15.285686] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:05:20.131 [2024-11-26 19:39:15.285890] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:20.131 [2024-11-26 19:39:15.352083] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:05:20.390 ************************************ 00:05:20.390 END TEST dd_bs_lt_native_bs 00:05:20.390 ************************************ 00:05:20.390 19:39:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # es=234 00:05:20.390 19:39:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:20.390 19:39:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@664 -- # es=106 00:05:20.390 19:39:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@665 -- # case "$es" in 00:05:20.390 19:39:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@672 -- # es=1 00:05:20.390 19:39:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:20.390 00:05:20.390 real 0m0.453s 00:05:20.390 user 0m0.285s 00:05:20.390 sys 0m0.102s 00:05:20.390 
19:39:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:20.390 19:39:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:05:20.390 19:39:15 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:05:20.390 19:39:15 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:20.390 19:39:15 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.390 19:39:15 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:20.390 ************************************ 00:05:20.390 START TEST dd_rw 00:05:20.390 ************************************ 00:05:20.390 19:39:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1129 -- # basic_rw 4096 00:05:20.390 19:39:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:05:20.390 19:39:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:05:20.390 19:39:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:05:20.390 19:39:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:05:20.390 19:39:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:05:20.390 19:39:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:05:20.390 19:39:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:05:20.390 19:39:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:05:20.390 19:39:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:05:20.390 19:39:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:05:20.390 19:39:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:05:20.390 19:39:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:20.390 19:39:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:05:20.390 19:39:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:05:20.390 19:39:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:05:20.390 19:39:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:05:20.390 19:39:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:20.390 19:39:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:20.648 19:39:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:05:20.648 19:39:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:20.648 19:39:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:20.648 19:39:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:20.648 { 00:05:20.648 "subsystems": [ 00:05:20.648 { 00:05:20.648 "subsystem": "bdev", 00:05:20.648 "config": [ 00:05:20.648 { 00:05:20.648 "params": { 00:05:20.648 "trtype": "pcie", 00:05:20.648 "traddr": "0000:00:10.0", 00:05:20.648 "name": "Nvme0" 00:05:20.648 }, 00:05:20.648 "method": "bdev_nvme_attach_controller" 00:05:20.648 }, 00:05:20.648 { 00:05:20.648 "method": "bdev_wait_for_examine" 00:05:20.648 } 00:05:20.648 ] 00:05:20.648 } 00:05:20.648 
] 00:05:20.648 } 00:05:20.648 [2024-11-26 19:39:15.829422] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:05:20.648 [2024-11-26 19:39:15.829485] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58907 ] 00:05:20.905 [2024-11-26 19:39:15.966924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.905 [2024-11-26 19:39:16.008895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.905 [2024-11-26 19:39:16.043093] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:20.905  [2024-11-26T19:39:16.411Z] Copying: 60/60 [kB] (average 29 MBps) 00:05:21.164 00:05:21.164 19:39:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:05:21.164 19:39:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:21.164 19:39:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:21.164 19:39:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:21.164 { 00:05:21.164 "subsystems": [ 00:05:21.164 { 00:05:21.164 "subsystem": "bdev", 00:05:21.164 "config": [ 00:05:21.164 { 00:05:21.164 "params": { 00:05:21.164 "trtype": "pcie", 00:05:21.164 "traddr": "0000:00:10.0", 00:05:21.164 "name": "Nvme0" 00:05:21.164 }, 00:05:21.164 "method": "bdev_nvme_attach_controller" 00:05:21.164 }, 00:05:21.164 { 00:05:21.164 "method": "bdev_wait_for_examine" 00:05:21.164 } 00:05:21.164 ] 00:05:21.164 } 00:05:21.164 ] 00:05:21.164 } 00:05:21.164 [2024-11-26 19:39:16.293628] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
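The JSON fragments interleaved in the trace above are the bdev configuration that spdk_dd reads from a file descriptor (--json /dev/fd/62): attach the PCIe controller at 0000:00:10.0 as "Nvme0", then wait for bdev examination so Nvme0n1 exists before the copy starts. A minimal sketch of that mechanism, assuming the harness's gen_conf helper simply prints this JSON and that the descriptor comes from process substitution (both are inferences from the trace, not taken from the harness source):

gen_conf() {
  # Mirrors the config printed in the log; the PCIe address is specific to this VM.
  cat <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON
}

# Process substitution exposes the JSON as a /dev/fd/NN path, matching the
# --json /dev/fd/62 argument seen throughout the trace.
spdk_dd --if=dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json <(gen_conf)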
00:05:21.164 [2024-11-26 19:39:16.293695] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58921 ] 00:05:21.422 [2024-11-26 19:39:16.437831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.422 [2024-11-26 19:39:16.474299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.422 [2024-11-26 19:39:16.507984] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:21.422  [2024-11-26T19:39:16.928Z] Copying: 60/60 [kB] (average 19 MBps) 00:05:21.681 00:05:21.681 19:39:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:21.681 19:39:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:05:21.681 19:39:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:21.681 19:39:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:21.681 19:39:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:05:21.681 19:39:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:21.681 19:39:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:21.681 19:39:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:21.681 19:39:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:21.681 19:39:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:21.681 19:39:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:21.681 { 00:05:21.681 "subsystems": [ 00:05:21.681 { 00:05:21.681 "subsystem": "bdev", 00:05:21.681 "config": [ 00:05:21.681 { 00:05:21.681 "params": { 00:05:21.681 "trtype": "pcie", 00:05:21.681 "traddr": "0000:00:10.0", 00:05:21.681 "name": "Nvme0" 00:05:21.681 }, 00:05:21.681 "method": "bdev_nvme_attach_controller" 00:05:21.681 }, 00:05:21.681 { 00:05:21.681 "method": "bdev_wait_for_examine" 00:05:21.681 } 00:05:21.681 ] 00:05:21.681 } 00:05:21.681 ] 00:05:21.681 } 00:05:21.681 [2024-11-26 19:39:16.755612] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
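Each dd_rw iteration above is a write/read/verify round trip: the generated pattern in dd.dump0 is written through the Nvme0n1 bdev at the chosen block size and queue depth, read back into dd.dump1, and the two files are compared byte for byte. A condensed sketch of that pattern, reusing the hypothetical gen_conf helper from the previous note and the sizes from this first pass:

verify_rw() {
  local bs=$1 qd=$2 count=$3
  # write the test pattern through the NVMe bdev
  spdk_dd --if=dd.dump0 --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json <(gen_conf)
  # read the same region back into a second dump file
  spdk_dd --ib=Nvme0n1 --of=dd.dump1 --bs="$bs" --qd="$qd" --count="$count" --json <(gen_conf)
  # diff -q exits non-zero on any mismatch, failing the test
  diff -q dd.dump0 dd.dump1
}

verify_rw 4096 1 15   # the 60 kB, queue-depth-1 pass logged above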
00:05:21.681 [2024-11-26 19:39:16.755686] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58937 ] 00:05:21.681 [2024-11-26 19:39:16.895757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.941 [2024-11-26 19:39:16.932182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.941 [2024-11-26 19:39:16.963029] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:21.941  [2024-11-26T19:39:17.188Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:21.941 00:05:21.941 19:39:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:21.941 19:39:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:05:21.941 19:39:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:05:21.941 19:39:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:05:21.941 19:39:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:05:21.941 19:39:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:21.941 19:39:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:22.508 19:39:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:05:22.508 19:39:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:22.508 19:39:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:22.508 19:39:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:22.508 [2024-11-26 19:39:17.654544] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
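Between passes the harness runs clear_nvme, which overwrites the region under test with zeroes (one 1 MiB block from /dev/zero, as in the "Copying: 1024/1024 [kB]" line just above) so data left over from the previous pass cannot make a broken write look successful. A simplified stand-in for that step; the real helper in dd/common.sh also takes an nvme_ref argument that is empty in this trace and is omitted here:

clear_nvme() {
  local bdev=$1 size=$2          # size is informational here; 1 MiB covers the ~60 kB used
  local bs=1048576 count=1
  spdk_dd --if=/dev/zero --bs="$bs" --ob="$bdev" --count="$count" --json <(gen_conf)
}

clear_nvme Nvme0n1 61440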
00:05:22.508 [2024-11-26 19:39:17.654610] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58950 ] 00:05:22.508 { 00:05:22.508 "subsystems": [ 00:05:22.508 { 00:05:22.508 "subsystem": "bdev", 00:05:22.508 "config": [ 00:05:22.508 { 00:05:22.508 "params": { 00:05:22.508 "trtype": "pcie", 00:05:22.508 "traddr": "0000:00:10.0", 00:05:22.508 "name": "Nvme0" 00:05:22.508 }, 00:05:22.508 "method": "bdev_nvme_attach_controller" 00:05:22.508 }, 00:05:22.508 { 00:05:22.508 "method": "bdev_wait_for_examine" 00:05:22.508 } 00:05:22.508 ] 00:05:22.508 } 00:05:22.508 ] 00:05:22.508 } 00:05:22.766 [2024-11-26 19:39:17.793368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.766 [2024-11-26 19:39:17.828540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.766 [2024-11-26 19:39:17.860455] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:22.766  [2024-11-26T19:39:18.273Z] Copying: 60/60 [kB] (average 58 MBps) 00:05:23.026 00:05:23.026 19:39:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:05:23.026 19:39:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:23.026 19:39:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:23.026 19:39:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:23.026 [2024-11-26 19:39:18.101657] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:05:23.026 [2024-11-26 19:39:18.101724] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58969 ] 00:05:23.026 { 00:05:23.026 "subsystems": [ 00:05:23.026 { 00:05:23.026 "subsystem": "bdev", 00:05:23.026 "config": [ 00:05:23.026 { 00:05:23.026 "params": { 00:05:23.026 "trtype": "pcie", 00:05:23.026 "traddr": "0000:00:10.0", 00:05:23.026 "name": "Nvme0" 00:05:23.026 }, 00:05:23.026 "method": "bdev_nvme_attach_controller" 00:05:23.026 }, 00:05:23.026 { 00:05:23.026 "method": "bdev_wait_for_examine" 00:05:23.026 } 00:05:23.026 ] 00:05:23.026 } 00:05:23.026 ] 00:05:23.026 } 00:05:23.026 [2024-11-26 19:39:18.240419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.285 [2024-11-26 19:39:18.276456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.285 [2024-11-26 19:39:18.307594] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:23.285  [2024-11-26T19:39:18.532Z] Copying: 60/60 [kB] (average 58 MBps) 00:05:23.285 00:05:23.285 19:39:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:23.285 19:39:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:05:23.285 19:39:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:23.285 19:39:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:23.285 19:39:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:05:23.285 19:39:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:23.285 19:39:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:23.286 19:39:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:23.286 19:39:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:23.286 19:39:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:23.286 19:39:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:23.548 { 00:05:23.548 "subsystems": [ 00:05:23.548 { 00:05:23.548 "subsystem": "bdev", 00:05:23.548 "config": [ 00:05:23.548 { 00:05:23.548 "params": { 00:05:23.548 "trtype": "pcie", 00:05:23.548 "traddr": "0000:00:10.0", 00:05:23.548 "name": "Nvme0" 00:05:23.548 }, 00:05:23.548 "method": "bdev_nvme_attach_controller" 00:05:23.548 }, 00:05:23.548 { 00:05:23.548 "method": "bdev_wait_for_examine" 00:05:23.548 } 00:05:23.548 ] 00:05:23.548 } 00:05:23.548 ] 00:05:23.548 } 00:05:23.548 [2024-11-26 19:39:18.560897] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:05:23.548 [2024-11-26 19:39:18.560964] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58979 ] 00:05:23.548 [2024-11-26 19:39:18.702433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.548 [2024-11-26 19:39:18.740078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.548 [2024-11-26 19:39:18.772649] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:23.807  [2024-11-26T19:39:19.054Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:05:23.807 00:05:23.807 19:39:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:05:23.807 19:39:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:23.807 19:39:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:05:23.807 19:39:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:05:23.807 19:39:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:05:23.807 19:39:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:05:23.807 19:39:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:23.807 19:39:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:24.374 19:39:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:05:24.374 19:39:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:24.374 19:39:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:24.374 19:39:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:24.374 { 00:05:24.374 "subsystems": [ 00:05:24.374 { 00:05:24.374 "subsystem": "bdev", 00:05:24.374 "config": [ 00:05:24.374 { 00:05:24.374 "params": { 00:05:24.374 "trtype": "pcie", 00:05:24.374 "traddr": "0000:00:10.0", 00:05:24.374 "name": "Nvme0" 00:05:24.374 }, 00:05:24.374 "method": "bdev_nvme_attach_controller" 00:05:24.374 }, 00:05:24.374 { 00:05:24.374 "method": "bdev_wait_for_examine" 00:05:24.374 } 00:05:24.374 ] 00:05:24.374 } 00:05:24.374 ] 00:05:24.374 } 00:05:24.374 [2024-11-26 19:39:19.438070] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
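The passes in this trace come from a small sweep: three block sizes derived from the native 4096-byte LBA by left-shifting (4096, 8192, 16384), each run at queue depths 1 and 64, with the block count chosen so the transfer stays near 60 kB (15 x 4096 = 61440, 7 x 8192 = 57344, 3 x 16384 = 49152 bytes, matching the size= values above). One way to reproduce those parameters; the integer division used for count is an assumption that happens to match the logged values:

native_bs=4096
qds=(1 64)
bss=()
for s in 0 1 2; do
  bss+=( $((native_bs << s)) )        # 4096, 8192, 16384
done
for bs in "${bss[@]}"; do
  for qd in "${qds[@]}"; do
    count=$(( 61440 / bs ))           # 15, 7, 3
    echo "bs=$bs qd=$qd count=$count size=$(( count * bs ))"
  done
done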
00:05:24.374 [2024-11-26 19:39:19.438133] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58998 ] 00:05:24.374 [2024-11-26 19:39:19.589330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.632 [2024-11-26 19:39:19.638491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.632 [2024-11-26 19:39:19.675339] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:24.632  [2024-11-26T19:39:20.138Z] Copying: 56/56 [kB] (average 54 MBps) 00:05:24.891 00:05:24.891 19:39:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:24.891 19:39:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:05:24.891 19:39:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:24.891 19:39:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:24.891 [2024-11-26 19:39:19.927343] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:05:24.891 [2024-11-26 19:39:19.927413] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59012 ] 00:05:24.891 { 00:05:24.891 "subsystems": [ 00:05:24.891 { 00:05:24.891 "subsystem": "bdev", 00:05:24.891 "config": [ 00:05:24.891 { 00:05:24.891 "params": { 00:05:24.891 "trtype": "pcie", 00:05:24.891 "traddr": "0000:00:10.0", 00:05:24.891 "name": "Nvme0" 00:05:24.891 }, 00:05:24.891 "method": "bdev_nvme_attach_controller" 00:05:24.891 }, 00:05:24.891 { 00:05:24.891 "method": "bdev_wait_for_examine" 00:05:24.891 } 00:05:24.891 ] 00:05:24.891 } 00:05:24.891 ] 00:05:24.891 } 00:05:24.891 [2024-11-26 19:39:20.062919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.891 [2024-11-26 19:39:20.100810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.891 [2024-11-26 19:39:20.133553] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:25.149  [2024-11-26T19:39:20.396Z] Copying: 56/56 [kB] (average 9333 kBps) 00:05:25.149 00:05:25.149 19:39:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:25.149 19:39:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:05:25.149 19:39:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:25.149 19:39:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:25.149 19:39:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:05:25.149 19:39:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:25.149 19:39:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:25.149 19:39:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 
00:05:25.149 19:39:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:25.149 19:39:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:25.149 19:39:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:25.149 [2024-11-26 19:39:20.384942] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:05:25.149 [2024-11-26 19:39:20.385004] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59027 ] 00:05:25.406 { 00:05:25.406 "subsystems": [ 00:05:25.406 { 00:05:25.406 "subsystem": "bdev", 00:05:25.406 "config": [ 00:05:25.406 { 00:05:25.406 "params": { 00:05:25.406 "trtype": "pcie", 00:05:25.406 "traddr": "0000:00:10.0", 00:05:25.406 "name": "Nvme0" 00:05:25.406 }, 00:05:25.406 "method": "bdev_nvme_attach_controller" 00:05:25.406 }, 00:05:25.406 { 00:05:25.406 "method": "bdev_wait_for_examine" 00:05:25.406 } 00:05:25.406 ] 00:05:25.406 } 00:05:25.406 ] 00:05:25.406 } 00:05:25.406 [2024-11-26 19:39:20.522240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.406 [2024-11-26 19:39:20.558792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.406 [2024-11-26 19:39:20.590420] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:25.664  [2024-11-26T19:39:20.911Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:25.664 00:05:25.664 19:39:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:25.664 19:39:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:05:25.664 19:39:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:05:25.664 19:39:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:05:25.664 19:39:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:05:25.664 19:39:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:25.664 19:39:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:26.230 19:39:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:05:26.230 19:39:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:26.230 19:39:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:26.230 19:39:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:26.230 { 00:05:26.230 "subsystems": [ 00:05:26.230 { 00:05:26.230 "subsystem": "bdev", 00:05:26.230 "config": [ 00:05:26.230 { 00:05:26.230 "params": { 00:05:26.230 "trtype": "pcie", 00:05:26.230 "traddr": "0000:00:10.0", 00:05:26.230 "name": "Nvme0" 00:05:26.230 }, 00:05:26.230 "method": "bdev_nvme_attach_controller" 00:05:26.230 }, 00:05:26.230 { 00:05:26.230 "method": "bdev_wait_for_examine" 00:05:26.230 } 00:05:26.230 ] 00:05:26.230 } 00:05:26.230 ] 00:05:26.230 } 00:05:26.230 [2024-11-26 19:39:21.257542] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:05:26.230 [2024-11-26 19:39:21.257612] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59046 ] 00:05:26.230 [2024-11-26 19:39:21.399266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.230 [2024-11-26 19:39:21.436254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.230 [2024-11-26 19:39:21.468799] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:26.488  [2024-11-26T19:39:21.735Z] Copying: 56/56 [kB] (average 54 MBps) 00:05:26.488 00:05:26.488 19:39:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:05:26.488 19:39:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:26.488 19:39:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:26.488 19:39:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:26.488 { 00:05:26.488 "subsystems": [ 00:05:26.488 { 00:05:26.488 "subsystem": "bdev", 00:05:26.488 "config": [ 00:05:26.488 { 00:05:26.488 "params": { 00:05:26.488 "trtype": "pcie", 00:05:26.488 "traddr": "0000:00:10.0", 00:05:26.488 "name": "Nvme0" 00:05:26.488 }, 00:05:26.488 "method": "bdev_nvme_attach_controller" 00:05:26.488 }, 00:05:26.488 { 00:05:26.488 "method": "bdev_wait_for_examine" 00:05:26.488 } 00:05:26.488 ] 00:05:26.488 } 00:05:26.488 ] 00:05:26.488 } 00:05:26.488 [2024-11-26 19:39:21.715403] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:05:26.488 [2024-11-26 19:39:21.715465] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59054 ] 00:05:26.747 [2024-11-26 19:39:21.854581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.747 [2024-11-26 19:39:21.892893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.747 [2024-11-26 19:39:21.927311] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:27.005  [2024-11-26T19:39:22.252Z] Copying: 56/56 [kB] (average 54 MBps) 00:05:27.005 00:05:27.005 19:39:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:27.005 19:39:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:05:27.005 19:39:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:27.005 19:39:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:27.005 19:39:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:05:27.005 19:39:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:27.005 19:39:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:27.005 19:39:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:27.005 19:39:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:27.005 19:39:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:27.005 19:39:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:27.006 [2024-11-26 19:39:22.170098] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:05:27.006 [2024-11-26 19:39:22.170162] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59070 ] 00:05:27.006 { 00:05:27.006 "subsystems": [ 00:05:27.006 { 00:05:27.006 "subsystem": "bdev", 00:05:27.006 "config": [ 00:05:27.006 { 00:05:27.006 "params": { 00:05:27.006 "trtype": "pcie", 00:05:27.006 "traddr": "0000:00:10.0", 00:05:27.006 "name": "Nvme0" 00:05:27.006 }, 00:05:27.006 "method": "bdev_nvme_attach_controller" 00:05:27.006 }, 00:05:27.006 { 00:05:27.006 "method": "bdev_wait_for_examine" 00:05:27.006 } 00:05:27.006 ] 00:05:27.006 } 00:05:27.006 ] 00:05:27.006 } 00:05:27.265 [2024-11-26 19:39:22.305081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.265 [2024-11-26 19:39:22.340436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.265 [2024-11-26 19:39:22.371202] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:27.265  [2024-11-26T19:39:22.770Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:27.523 00:05:27.523 19:39:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:05:27.523 19:39:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:27.523 19:39:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:05:27.523 19:39:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:05:27.523 19:39:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:05:27.523 19:39:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:05:27.523 19:39:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:27.523 19:39:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:27.782 19:39:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:05:27.782 19:39:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:27.782 19:39:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:27.782 19:39:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:27.782 { 00:05:27.782 "subsystems": [ 00:05:27.782 { 00:05:27.782 "subsystem": "bdev", 00:05:27.782 "config": [ 00:05:27.782 { 00:05:27.782 "params": { 00:05:27.782 "trtype": "pcie", 00:05:27.782 "traddr": "0000:00:10.0", 00:05:27.782 "name": "Nvme0" 00:05:27.782 }, 00:05:27.782 "method": "bdev_nvme_attach_controller" 00:05:27.782 }, 00:05:27.782 { 00:05:27.782 "method": "bdev_wait_for_examine" 00:05:27.782 } 00:05:27.782 ] 00:05:27.782 } 00:05:27.782 ] 00:05:27.782 } 00:05:27.782 [2024-11-26 19:39:23.004994] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:05:27.782 [2024-11-26 19:39:23.005067] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59089 ] 00:05:28.040 [2024-11-26 19:39:23.147220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.040 [2024-11-26 19:39:23.180739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.040 [2024-11-26 19:39:23.211178] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:28.299  [2024-11-26T19:39:23.546Z] Copying: 48/48 [kB] (average 46 MBps) 00:05:28.299 00:05:28.299 19:39:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:05:28.299 19:39:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:28.299 19:39:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:28.299 19:39:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:28.299 [2024-11-26 19:39:23.458555] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:05:28.299 [2024-11-26 19:39:23.458617] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59102 ] 00:05:28.299 { 00:05:28.299 "subsystems": [ 00:05:28.299 { 00:05:28.299 "subsystem": "bdev", 00:05:28.299 "config": [ 00:05:28.299 { 00:05:28.299 "params": { 00:05:28.299 "trtype": "pcie", 00:05:28.299 "traddr": "0000:00:10.0", 00:05:28.299 "name": "Nvme0" 00:05:28.299 }, 00:05:28.299 "method": "bdev_nvme_attach_controller" 00:05:28.299 }, 00:05:28.299 { 00:05:28.299 "method": "bdev_wait_for_examine" 00:05:28.299 } 00:05:28.299 ] 00:05:28.299 } 00:05:28.299 ] 00:05:28.299 } 00:05:28.557 [2024-11-26 19:39:23.598335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.557 [2024-11-26 19:39:23.637704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.557 [2024-11-26 19:39:23.671269] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:28.557  [2024-11-26T19:39:24.062Z] Copying: 48/48 [kB] (average 46 MBps) 00:05:28.815 00:05:28.815 19:39:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:28.815 19:39:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:05:28.815 19:39:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:28.815 19:39:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:28.815 19:39:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:05:28.815 19:39:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:28.815 19:39:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:28.815 19:39:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 
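The 48/48 [kB] progress figures for this pass follow directly from its parameters: 3 blocks of 16384 bytes. A quick check:

echo "$(( 3 * 16384 )) bytes"         # 49152, the size= value logged for this pass
echo "$(( 3 * 16384 / 1024 )) kB"     # 48, matching the "Copying: 48/48 [kB]" lines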
00:05:28.815 19:39:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:28.815 19:39:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:28.815 19:39:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:28.815 [2024-11-26 19:39:23.925798] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:05:28.815 [2024-11-26 19:39:23.925879] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59118 ] 00:05:28.815 { 00:05:28.815 "subsystems": [ 00:05:28.815 { 00:05:28.815 "subsystem": "bdev", 00:05:28.815 "config": [ 00:05:28.815 { 00:05:28.815 "params": { 00:05:28.815 "trtype": "pcie", 00:05:28.815 "traddr": "0000:00:10.0", 00:05:28.815 "name": "Nvme0" 00:05:28.815 }, 00:05:28.815 "method": "bdev_nvme_attach_controller" 00:05:28.815 }, 00:05:28.815 { 00:05:28.815 "method": "bdev_wait_for_examine" 00:05:28.815 } 00:05:28.815 ] 00:05:28.815 } 00:05:28.815 ] 00:05:28.815 } 00:05:29.072 [2024-11-26 19:39:24.065952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.072 [2024-11-26 19:39:24.108551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.072 [2024-11-26 19:39:24.142180] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:29.072  [2024-11-26T19:39:24.576Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:29.329 00:05:29.329 19:39:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:29.329 19:39:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:05:29.329 19:39:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:05:29.329 19:39:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:05:29.329 19:39:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:05:29.329 19:39:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:29.329 19:39:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:29.587 19:39:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:05:29.587 19:39:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:29.587 19:39:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:29.587 19:39:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:29.587 [2024-11-26 19:39:24.793331] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:05:29.587 [2024-11-26 19:39:24.793557] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59131 ] 00:05:29.587 { 00:05:29.587 "subsystems": [ 00:05:29.587 { 00:05:29.587 "subsystem": "bdev", 00:05:29.587 "config": [ 00:05:29.587 { 00:05:29.587 "params": { 00:05:29.587 "trtype": "pcie", 00:05:29.587 "traddr": "0000:00:10.0", 00:05:29.587 "name": "Nvme0" 00:05:29.587 }, 00:05:29.587 "method": "bdev_nvme_attach_controller" 00:05:29.587 }, 00:05:29.587 { 00:05:29.587 "method": "bdev_wait_for_examine" 00:05:29.587 } 00:05:29.587 ] 00:05:29.587 } 00:05:29.587 ] 00:05:29.587 } 00:05:29.845 [2024-11-26 19:39:24.933494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.845 [2024-11-26 19:39:24.975156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.845 [2024-11-26 19:39:25.012583] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:30.104  [2024-11-26T19:39:25.351Z] Copying: 48/48 [kB] (average 46 MBps) 00:05:30.104 00:05:30.104 19:39:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:30.104 19:39:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:30.104 19:39:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:05:30.104 19:39:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:30.104 [2024-11-26 19:39:25.272816] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:05:30.104 { 00:05:30.104 "subsystems": [ 00:05:30.104 { 00:05:30.104 "subsystem": "bdev", 00:05:30.104 "config": [ 00:05:30.104 { 00:05:30.104 "params": { 00:05:30.104 "trtype": "pcie", 00:05:30.104 "traddr": "0000:00:10.0", 00:05:30.104 "name": "Nvme0" 00:05:30.104 }, 00:05:30.104 "method": "bdev_nvme_attach_controller" 00:05:30.104 }, 00:05:30.104 { 00:05:30.104 "method": "bdev_wait_for_examine" 00:05:30.104 } 00:05:30.104 ] 00:05:30.104 } 00:05:30.104 ] 00:05:30.104 } 00:05:30.104 [2024-11-26 19:39:25.272884] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59145 ] 00:05:30.362 [2024-11-26 19:39:25.410411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.362 [2024-11-26 19:39:25.454744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.362 [2024-11-26 19:39:25.490135] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:30.362  [2024-11-26T19:39:25.867Z] Copying: 48/48 [kB] (average 46 MBps) 00:05:30.620 00:05:30.620 19:39:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:30.620 19:39:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:05:30.620 19:39:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:30.620 19:39:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:30.620 19:39:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:05:30.620 19:39:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:30.620 19:39:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:30.620 19:39:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:30.620 19:39:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:30.620 19:39:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:30.620 19:39:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:30.620 [2024-11-26 19:39:25.742470] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:05:30.620 [2024-11-26 19:39:25.742567] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59160 ] 00:05:30.620 { 00:05:30.620 "subsystems": [ 00:05:30.620 { 00:05:30.620 "subsystem": "bdev", 00:05:30.620 "config": [ 00:05:30.620 { 00:05:30.620 "params": { 00:05:30.620 "trtype": "pcie", 00:05:30.620 "traddr": "0000:00:10.0", 00:05:30.620 "name": "Nvme0" 00:05:30.620 }, 00:05:30.620 "method": "bdev_nvme_attach_controller" 00:05:30.620 }, 00:05:30.620 { 00:05:30.620 "method": "bdev_wait_for_examine" 00:05:30.620 } 00:05:30.620 ] 00:05:30.620 } 00:05:30.620 ] 00:05:30.620 } 00:05:30.927 [2024-11-26 19:39:25.882209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.927 [2024-11-26 19:39:25.923738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.927 [2024-11-26 19:39:25.959972] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:30.927  [2024-11-26T19:39:26.432Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:31.185 00:05:31.185 ************************************ 00:05:31.185 END TEST dd_rw 00:05:31.185 ************************************ 00:05:31.185 00:05:31.185 real 0m10.793s 00:05:31.185 user 0m7.685s 00:05:31.185 sys 0m3.396s 00:05:31.185 19:39:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.185 19:39:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:31.185 19:39:26 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:05:31.185 19:39:26 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.185 19:39:26 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.185 19:39:26 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:31.185 ************************************ 00:05:31.185 START TEST dd_rw_offset 00:05:31.185 ************************************ 00:05:31.185 19:39:26 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1129 -- # basic_offset 00:05:31.185 19:39:26 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:05:31.185 19:39:26 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:05:31.185 19:39:26 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:05:31.185 19:39:26 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:05:31.185 19:39:26 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:05:31.185 19:39:26 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=ibg9rl42vhn6cyvrefwc6wprext3xnttu4k4wss2esf64bx1wj63h78qylz05ijamwckwkkl90utrwiwv3hz9ve8xkp1zdadtcyvt4loapwos0jpda6acm79xlnnzn71msb3mx8ikel0kjb3mej066bywpq7y0kb6glyky51j4h341bwm5smomu8gtecbreb85vmt1ka1niz3jna6it4x34j3ngyi8akrckw4cu7r47hs9m3lblfqmtdl0g1laowpoojihfob9einvcjs24g1c20lhtjhgkg42ux7hlvl34t3ktrol5s4rtoar6m4tenanykfpavf5v2l3ucuuespdck5x4p2o7ac0a4xc8xn78326egdfyy0wil6s4mlda3qnqr41rl8jfyvmnt4ld91u7n5mw6ncsftmm8g4fn7rloxc24sniknd8wq7v4odyermm4ll33a5nsmq19o5ywqf82juhfth4v5psifubg3xemy9z06w1v93asnchoxn5sl93648268gqsu7xgi9w70xst1op2sb3ckaequasrbudch55ivmd1vb4yxj1221bin66tlxljwn7ebvs8i7bjry10qdgsh820o9fmmzqajhve1yqowheemtdz4g1ymzsuxupfn9bnjispus8itf72tu3w76ux1wacn8z232a5kvoh9jwmszp549hpo8b9rk6kw9ysxm0epb1ip77uvwfoulv5qihm5k262fdu7o4a2pkbz9rj9m0q3gkek7jweyq79xahj0o3lx6e1nh7f40z6fltat3rzuqy1e1fkewqa28pa9wdw8a70eaevtfpsuj0wihhh1bcxj6okn4f9tye23k26nni1jrt9u6l2ixnxp3oi2yj6yemn01er5xgekuwrrsxljplbsohd91en4jg9xnh0mwscbsq9a77w46vg0qrak8mm60y1hvvfs2grmby6qivpovjg9ylucg8xpaht963yndtp18m40dnygkfjk814pyrn98n36c1fwbycrekbskslv48fjeel5ah478aehqgrk56tqu3251idpx4gp4xc3eumchd82ofx2tg8cu696m5yhpvesf9uc6c2x72a81sxu5zl0qtbzr0qpmc6lwq0mq7i4zbv5k6dxklc2g0tey40sy3e6wqbtjx9ypk6wkp9s4eccuds50b8zkdjfrsdevi3o7ys28lfx6qoq6dfhdjy5suv0au6if7u617nifzamo3027qq884ji7j37ujks2l9krs3eaogmtxlf3re8c5pb5zav9cojmg5qsu5r3txj8t7c8cvvdiulk3g8mwp8qlqibr4tmlsh17cr8ld0bx501r88k8qwl209joedlfvfp7ugwjq4evgmnqkiof9le7t8o1txgo5nwgi4l91tc2e41cjms18hkvuj67tfgel73zbku46zwnikoi2voc0chrvo7rtm31hj2c846yisiwl4th6t8d60lpzt3dr8vafq3wrcggm5rmtlh2y4v5ztkmeyw3gwczbccwe5pzps09sszu6f9uoifwinj0zjttxe0rh2l3pba66v66fwx1kpzwooui3om52ngvam7httpxbzfrz7m2452j2pv2ww1ktrl1pm2gs6c17dxc38u68idrl2vhjz6nf29ccky2fi0jajon17u74phypbuwixwnh0pc1u2cbj49pw1a0prdgpk8c7sfwhvlu4ut2svrmigrp3m4sozq48fips1f0e527f4z7gr6regnca016uv8gm94767uxx930vlfgs6w3wpy37f32fu53w3y3ctrieft1urtwwnabmygc2951enfioccw967jbawqfyr1h2lhl11kc1gvqgyv64nhbxwsua4uqjduvk38ng68cnces34o9ikpsm4p9pehm214ay5qagi13odg6lkzjaxjlcfhrnah3niad7bwvchgqip3iiq2pr4shytvebc9ccrg6bgitloflksgms0n436eylj5qn972fnwsqbvkcnmj4b307tz2dulu8gabypgrxajhbk5belcslp9z72xdxx5z8a3fd08kv2flq0wclupu3cvnouzwurv5h92cogqy3we6wf69o6jhxt85cdrojavvpkw3fy0zaxa9uf7c8wdjfo15phuovwc8uwia7xww7rdc91bdfdq9o7mwdbj088dgbegdnu5bq914osxwi4bsfquzis2g39mene96dos911toelp73nuv6k8g9ko9p7e9zghq6cicqgdoeb3pwctiq1p5p59kl3xrjenbxqn8p2y4po0lyr90h2svuqrfhivwf0n2ae6nhsrngx5gq5xjs8br86wp6hz406jcswjkgteuwn35hc2d98fweilp9cjem55oqqm5zvg9rslewh5vyo2oawsh5zzuv3u7bxilqigp9x9u7h7whfq3708abx3dv5dnuad9brc41p62g708h7on9q4utuc81uhmotwdd2yxppkac72w828ovii1ota3j0wzjg7ol7biiqsrfe9232ldfz6tgmganbosk6vgyj4bmtawbz4nm5osdk1uy90cxc7mao1ytd2amyui32cbky45mdqoph5b0q9jfsf59d0tv3we1b2m5ez9v4u9grk645s8lxjdwarh4lgzavvo26u76j5tsycmxiuqhcy1ic5ygh2ldug9eac25f3v3lzsxs79jcr585ri0r1hyan7rkm3mmtmr48d9bo4fsh3j45c2lp7xes0zaivcck9vdcb3r6sstupb3eyux85flygb5rwfqf2f8mjw2d2q4v9ilh0mz902tb7ql2opmr4ipdnbrm783hyatqgiy904c6rz1gg24fl635kwg6i4clq9mmtkhz2xix7orc517ht23489r97d5hykaqv5dur0gbo1p21lz0xyket8z9wz43idzsvyhvfliqduhhe7lf7kdysbcca3pkqkyl7t7ristgp0pt90b1im9els38c2yrg8sgbg092cv1hyvjezsq7oj6llwiw3vio2bcyw4i8q3cwcgn7u3afbymchx5hdpvkxvd0yw3kyl6hyo0w0tgjbs4tghwguldkx3z9dwysjxkfz24zw1w5u2uff6d259zmpopryscflqk6mxpzn59quoueh06dk41kd2oy771j18tzqjqdtx6n6aorcvxo7pw4bchdt6pxdsnln4f6914id1fx11kvl3xnut4ihc7bkdr8rybfsedomc10x86nuxxy9fo0knm9wgjhq67t4nvmx37gnm7lc35h3468oya2jxaeaam5zimk1yc7kqn1bbwksxspdwznndx7ygehnlny0gew9jcityfo7yyzhzbts3wn6259h02dld2kcy8at2qwecpsp816py9sq2oecchtt3amlcjqky9ebpx4ue5ud7l0h6jtu2i7yklcn4jazhiicyvuak9p8017rfc1ffmyiye8ey3pzhe461vndwmei8o91vjw8yj13o5gznm5b23g0s566kcugedi28vx518iu6di3n22fymbavz2
b39sghb8q51zgd7xz1d746cnv2juemznxiylgmjdafzn31sranw4bp3l4lv0lwok91ersmicb8l5fjs0hjrz8fselyt7tq2orrmazrkbsgyfi8dg2b1hec454g29k17qlt3s2acrzje4d34rtjpi3y0ndcahch3bth0jcxd0ezzfhkunyymazok992ve49xrlvbpyvazdkvorzhf78et1m85sfy4ghcjkz3p0ztv40i16vj39cu3q3xb6l278s9ssfuplu4heq0jlwk4nwj6ae6g4mjhoo4mkgsn8gb63210d3i9qlilaor4cpeyahy8arp5ek9ba0soqw6gjmh3tzkhyilqfj350ve42j37ikyjg1x5fk0ua34hrdu0rdduy6r54nx8g48wfwzbwtesazfqf02rnw62z381ym6budb0pcyzsa7m6gld7lm5mggz6yislmuhue4uf2ere7lu7jweu6ibyrs2lagt1unhlw0kpfqtfxkecmr6erac3vsl70kd9rzwusbm1ziildxb25y27qkbutxeoa 00:05:31.185 19:39:26 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:05:31.185 19:39:26 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:05:31.185 19:39:26 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:05:31.185 19:39:26 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:05:31.185 [2024-11-26 19:39:26.331593] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:05:31.185 [2024-11-26 19:39:26.331664] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59191 ] 00:05:31.185 { 00:05:31.185 "subsystems": [ 00:05:31.185 { 00:05:31.185 "subsystem": "bdev", 00:05:31.185 "config": [ 00:05:31.185 { 00:05:31.185 "params": { 00:05:31.185 "trtype": "pcie", 00:05:31.185 "traddr": "0000:00:10.0", 00:05:31.185 "name": "Nvme0" 00:05:31.185 }, 00:05:31.185 "method": "bdev_nvme_attach_controller" 00:05:31.185 }, 00:05:31.185 { 00:05:31.185 "method": "bdev_wait_for_examine" 00:05:31.185 } 00:05:31.185 ] 00:05:31.185 } 00:05:31.185 ] 00:05:31.185 } 00:05:31.442 [2024-11-26 19:39:26.473408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.442 [2024-11-26 19:39:26.512275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.442 [2024-11-26 19:39:26.546099] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:31.442  [2024-11-26T19:39:26.947Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:05:31.700 00:05:31.700 19:39:26 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:05:31.700 19:39:26 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:05:31.700 19:39:26 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:05:31.700 19:39:26 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:05:31.700 [2024-11-26 19:39:26.795271] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
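The dd_rw_offset test running here checks that --seek on the write side and --skip on the read side address the same block: a 4096-character random string (the long alphanumeric run above, produced by gen_bytes) is written one block past the start of the bdev, read back from the same offset, and compared against the original. A sketch of that round trip, with a urandom pipeline standing in for gen_bytes and the hypothetical gen_conf helper from earlier:

data=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 4096)   # stand-in for gen_bytes 4096
printf '%s' "$data" > dd.dump0

# write one block past LBA 0, then read the same block back
spdk_dd --if=dd.dump0 --ob=Nvme0n1 --seek=1 --json <(gen_conf)
spdk_dd --ib=Nvme0n1 --of=dd.dump1 --skip=1 --count=1 --json <(gen_conf)

# compare the first 4096 bytes read back against the original string,
# mirroring the read -rn4096 / [[ $data == $data_check ]] check in the trace
read -rn4096 data_check < dd.dump1
[[ "$data" == "$data_check" ]] && echo "offset read/write verified"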
00:05:31.700 [2024-11-26 19:39:26.795330] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59204 ] 00:05:31.700 { 00:05:31.700 "subsystems": [ 00:05:31.700 { 00:05:31.700 "subsystem": "bdev", 00:05:31.700 "config": [ 00:05:31.700 { 00:05:31.700 "params": { 00:05:31.700 "trtype": "pcie", 00:05:31.700 "traddr": "0000:00:10.0", 00:05:31.700 "name": "Nvme0" 00:05:31.700 }, 00:05:31.700 "method": "bdev_nvme_attach_controller" 00:05:31.700 }, 00:05:31.700 { 00:05:31.700 "method": "bdev_wait_for_examine" 00:05:31.700 } 00:05:31.700 ] 00:05:31.700 } 00:05:31.700 ] 00:05:31.700 } 00:05:31.700 [2024-11-26 19:39:26.930066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.956 [2024-11-26 19:39:26.973324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.956 [2024-11-26 19:39:27.008271] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:31.956  [2024-11-26T19:39:27.460Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:05:32.213 00:05:32.213 19:39:27 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:05:32.214 19:39:27 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ ibg9rl42vhn6cyvrefwc6wprext3xnttu4k4wss2esf64bx1wj63h78qylz05ijamwckwkkl90utrwiwv3hz9ve8xkp1zdadtcyvt4loapwos0jpda6acm79xlnnzn71msb3mx8ikel0kjb3mej066bywpq7y0kb6glyky51j4h341bwm5smomu8gtecbreb85vmt1ka1niz3jna6it4x34j3ngyi8akrckw4cu7r47hs9m3lblfqmtdl0g1laowpoojihfob9einvcjs24g1c20lhtjhgkg42ux7hlvl34t3ktrol5s4rtoar6m4tenanykfpavf5v2l3ucuuespdck5x4p2o7ac0a4xc8xn78326egdfyy0wil6s4mlda3qnqr41rl8jfyvmnt4ld91u7n5mw6ncsftmm8g4fn7rloxc24sniknd8wq7v4odyermm4ll33a5nsmq19o5ywqf82juhfth4v5psifubg3xemy9z06w1v93asnchoxn5sl93648268gqsu7xgi9w70xst1op2sb3ckaequasrbudch55ivmd1vb4yxj1221bin66tlxljwn7ebvs8i7bjry10qdgsh820o9fmmzqajhve1yqowheemtdz4g1ymzsuxupfn9bnjispus8itf72tu3w76ux1wacn8z232a5kvoh9jwmszp549hpo8b9rk6kw9ysxm0epb1ip77uvwfoulv5qihm5k262fdu7o4a2pkbz9rj9m0q3gkek7jweyq79xahj0o3lx6e1nh7f40z6fltat3rzuqy1e1fkewqa28pa9wdw8a70eaevtfpsuj0wihhh1bcxj6okn4f9tye23k26nni1jrt9u6l2ixnxp3oi2yj6yemn01er5xgekuwrrsxljplbsohd91en4jg9xnh0mwscbsq9a77w46vg0qrak8mm60y1hvvfs2grmby6qivpovjg9ylucg8xpaht963yndtp18m40dnygkfjk814pyrn98n36c1fwbycrekbskslv48fjeel5ah478aehqgrk56tqu3251idpx4gp4xc3eumchd82ofx2tg8cu696m5yhpvesf9uc6c2x72a81sxu5zl0qtbzr0qpmc6lwq0mq7i4zbv5k6dxklc2g0tey40sy3e6wqbtjx9ypk6wkp9s4eccuds50b8zkdjfrsdevi3o7ys28lfx6qoq6dfhdjy5suv0au6if7u617nifzamo3027qq884ji7j37ujks2l9krs3eaogmtxlf3re8c5pb5zav9cojmg5qsu5r3txj8t7c8cvvdiulk3g8mwp8qlqibr4tmlsh17cr8ld0bx501r88k8qwl209joedlfvfp7ugwjq4evgmnqkiof9le7t8o1txgo5nwgi4l91tc2e41cjms18hkvuj67tfgel73zbku46zwnikoi2voc0chrvo7rtm31hj2c846yisiwl4th6t8d60lpzt3dr8vafq3wrcggm5rmtlh2y4v5ztkmeyw3gwczbccwe5pzps09sszu6f9uoifwinj0zjttxe0rh2l3pba66v66fwx1kpzwooui3om52ngvam7httpxbzfrz7m2452j2pv2ww1ktrl1pm2gs6c17dxc38u68idrl2vhjz6nf29ccky2fi0jajon17u74phypbuwixwnh0pc1u2cbj49pw1a0prdgpk8c7sfwhvlu4ut2svrmigrp3m4sozq48fips1f0e527f4z7gr6regnca016uv8gm94767uxx930vlfgs6w3wpy37f32fu53w3y3ctrieft1urtwwnabmygc2951enfioccw967jbawqfyr1h2lhl11kc1gvqgyv64nhbxwsua4uqjduvk38ng68cnces34o9ikpsm4p9pehm214ay5qagi13odg6lkzjaxjlcfhrnah3niad7bwvchgqip3iiq2pr4shytvebc9ccrg6bgitloflksgms0n436eylj5qn972fnwsqbvkcnmj4b307tz2dulu8gabypgrxajhbk5belcslp9z72xdxx5z8a3fd08kv2flq0wclupu3cvnouzwurv5h92cogqy3we6wf69o6jhxt85cdrojav
vpkw3fy0zaxa9uf7c8wdjfo15phuovwc8uwia7xww7rdc91bdfdq9o7mwdbj088dgbegdnu5bq914osxwi4bsfquzis2g39mene96dos911toelp73nuv6k8g9ko9p7e9zghq6cicqgdoeb3pwctiq1p5p59kl3xrjenbxqn8p2y4po0lyr90h2svuqrfhivwf0n2ae6nhsrngx5gq5xjs8br86wp6hz406jcswjkgteuwn35hc2d98fweilp9cjem55oqqm5zvg9rslewh5vyo2oawsh5zzuv3u7bxilqigp9x9u7h7whfq3708abx3dv5dnuad9brc41p62g708h7on9q4utuc81uhmotwdd2yxppkac72w828ovii1ota3j0wzjg7ol7biiqsrfe9232ldfz6tgmganbosk6vgyj4bmtawbz4nm5osdk1uy90cxc7mao1ytd2amyui32cbky45mdqoph5b0q9jfsf59d0tv3we1b2m5ez9v4u9grk645s8lxjdwarh4lgzavvo26u76j5tsycmxiuqhcy1ic5ygh2ldug9eac25f3v3lzsxs79jcr585ri0r1hyan7rkm3mmtmr48d9bo4fsh3j45c2lp7xes0zaivcck9vdcb3r6sstupb3eyux85flygb5rwfqf2f8mjw2d2q4v9ilh0mz902tb7ql2opmr4ipdnbrm783hyatqgiy904c6rz1gg24fl635kwg6i4clq9mmtkhz2xix7orc517ht23489r97d5hykaqv5dur0gbo1p21lz0xyket8z9wz43idzsvyhvfliqduhhe7lf7kdysbcca3pkqkyl7t7ristgp0pt90b1im9els38c2yrg8sgbg092cv1hyvjezsq7oj6llwiw3vio2bcyw4i8q3cwcgn7u3afbymchx5hdpvkxvd0yw3kyl6hyo0w0tgjbs4tghwguldkx3z9dwysjxkfz24zw1w5u2uff6d259zmpopryscflqk6mxpzn59quoueh06dk41kd2oy771j18tzqjqdtx6n6aorcvxo7pw4bchdt6pxdsnln4f6914id1fx11kvl3xnut4ihc7bkdr8rybfsedomc10x86nuxxy9fo0knm9wgjhq67t4nvmx37gnm7lc35h3468oya2jxaeaam5zimk1yc7kqn1bbwksxspdwznndx7ygehnlny0gew9jcityfo7yyzhzbts3wn6259h02dld2kcy8at2qwecpsp816py9sq2oecchtt3amlcjqky9ebpx4ue5ud7l0h6jtu2i7yklcn4jazhiicyvuak9p8017rfc1ffmyiye8ey3pzhe461vndwmei8o91vjw8yj13o5gznm5b23g0s566kcugedi28vx518iu6di3n22fymbavz2b39sghb8q51zgd7xz1d746cnv2juemznxiylgmjdafzn31sranw4bp3l4lv0lwok91ersmicb8l5fjs0hjrz8fselyt7tq2orrmazrkbsgyfi8dg2b1hec454g29k17qlt3s2acrzje4d34rtjpi3y0ndcahch3bth0jcxd0ezzfhkunyymazok992ve49xrlvbpyvazdkvorzhf78et1m85sfy4ghcjkz3p0ztv40i16vj39cu3q3xb6l278s9ssfuplu4heq0jlwk4nwj6ae6g4mjhoo4mkgsn8gb63210d3i9qlilaor4cpeyahy8arp5ek9ba0soqw6gjmh3tzkhyilqfj350ve42j37ikyjg1x5fk0ua34hrdu0rdduy6r54nx8g48wfwzbwtesazfqf02rnw62z381ym6budb0pcyzsa7m6gld7lm5mggz6yislmuhue4uf2ere7lu7jweu6ibyrs2lagt1unhlw0kpfqtfxkecmr6erac3vsl70kd9rzwusbm1ziildxb25y27qkbutxeoa == 
\i\b\g\9\r\l\4\2\v\h\n\6\c\y\v\r\e\f\w\c\6\w\p\r\e\x\t\3\x\n\t\t\u\4\k\4\w\s\s\2\e\s\f\6\4\b\x\1\w\j\6\3\h\7\8\q\y\l\z\0\5\i\j\a\m\w\c\k\w\k\k\l\9\0\u\t\r\w\i\w\v\3\h\z\9\v\e\8\x\k\p\1\z\d\a\d\t\c\y\v\t\4\l\o\a\p\w\o\s\0\j\p\d\a\6\a\c\m\7\9\x\l\n\n\z\n\7\1\m\s\b\3\m\x\8\i\k\e\l\0\k\j\b\3\m\e\j\0\6\6\b\y\w\p\q\7\y\0\k\b\6\g\l\y\k\y\5\1\j\4\h\3\4\1\b\w\m\5\s\m\o\m\u\8\g\t\e\c\b\r\e\b\8\5\v\m\t\1\k\a\1\n\i\z\3\j\n\a\6\i\t\4\x\3\4\j\3\n\g\y\i\8\a\k\r\c\k\w\4\c\u\7\r\4\7\h\s\9\m\3\l\b\l\f\q\m\t\d\l\0\g\1\l\a\o\w\p\o\o\j\i\h\f\o\b\9\e\i\n\v\c\j\s\2\4\g\1\c\2\0\l\h\t\j\h\g\k\g\4\2\u\x\7\h\l\v\l\3\4\t\3\k\t\r\o\l\5\s\4\r\t\o\a\r\6\m\4\t\e\n\a\n\y\k\f\p\a\v\f\5\v\2\l\3\u\c\u\u\e\s\p\d\c\k\5\x\4\p\2\o\7\a\c\0\a\4\x\c\8\x\n\7\8\3\2\6\e\g\d\f\y\y\0\w\i\l\6\s\4\m\l\d\a\3\q\n\q\r\4\1\r\l\8\j\f\y\v\m\n\t\4\l\d\9\1\u\7\n\5\m\w\6\n\c\s\f\t\m\m\8\g\4\f\n\7\r\l\o\x\c\2\4\s\n\i\k\n\d\8\w\q\7\v\4\o\d\y\e\r\m\m\4\l\l\3\3\a\5\n\s\m\q\1\9\o\5\y\w\q\f\8\2\j\u\h\f\t\h\4\v\5\p\s\i\f\u\b\g\3\x\e\m\y\9\z\0\6\w\1\v\9\3\a\s\n\c\h\o\x\n\5\s\l\9\3\6\4\8\2\6\8\g\q\s\u\7\x\g\i\9\w\7\0\x\s\t\1\o\p\2\s\b\3\c\k\a\e\q\u\a\s\r\b\u\d\c\h\5\5\i\v\m\d\1\v\b\4\y\x\j\1\2\2\1\b\i\n\6\6\t\l\x\l\j\w\n\7\e\b\v\s\8\i\7\b\j\r\y\1\0\q\d\g\s\h\8\2\0\o\9\f\m\m\z\q\a\j\h\v\e\1\y\q\o\w\h\e\e\m\t\d\z\4\g\1\y\m\z\s\u\x\u\p\f\n\9\b\n\j\i\s\p\u\s\8\i\t\f\7\2\t\u\3\w\7\6\u\x\1\w\a\c\n\8\z\2\3\2\a\5\k\v\o\h\9\j\w\m\s\z\p\5\4\9\h\p\o\8\b\9\r\k\6\k\w\9\y\s\x\m\0\e\p\b\1\i\p\7\7\u\v\w\f\o\u\l\v\5\q\i\h\m\5\k\2\6\2\f\d\u\7\o\4\a\2\p\k\b\z\9\r\j\9\m\0\q\3\g\k\e\k\7\j\w\e\y\q\7\9\x\a\h\j\0\o\3\l\x\6\e\1\n\h\7\f\4\0\z\6\f\l\t\a\t\3\r\z\u\q\y\1\e\1\f\k\e\w\q\a\2\8\p\a\9\w\d\w\8\a\7\0\e\a\e\v\t\f\p\s\u\j\0\w\i\h\h\h\1\b\c\x\j\6\o\k\n\4\f\9\t\y\e\2\3\k\2\6\n\n\i\1\j\r\t\9\u\6\l\2\i\x\n\x\p\3\o\i\2\y\j\6\y\e\m\n\0\1\e\r\5\x\g\e\k\u\w\r\r\s\x\l\j\p\l\b\s\o\h\d\9\1\e\n\4\j\g\9\x\n\h\0\m\w\s\c\b\s\q\9\a\7\7\w\4\6\v\g\0\q\r\a\k\8\m\m\6\0\y\1\h\v\v\f\s\2\g\r\m\b\y\6\q\i\v\p\o\v\j\g\9\y\l\u\c\g\8\x\p\a\h\t\9\6\3\y\n\d\t\p\1\8\m\4\0\d\n\y\g\k\f\j\k\8\1\4\p\y\r\n\9\8\n\3\6\c\1\f\w\b\y\c\r\e\k\b\s\k\s\l\v\4\8\f\j\e\e\l\5\a\h\4\7\8\a\e\h\q\g\r\k\5\6\t\q\u\3\2\5\1\i\d\p\x\4\g\p\4\x\c\3\e\u\m\c\h\d\8\2\o\f\x\2\t\g\8\c\u\6\9\6\m\5\y\h\p\v\e\s\f\9\u\c\6\c\2\x\7\2\a\8\1\s\x\u\5\z\l\0\q\t\b\z\r\0\q\p\m\c\6\l\w\q\0\m\q\7\i\4\z\b\v\5\k\6\d\x\k\l\c\2\g\0\t\e\y\4\0\s\y\3\e\6\w\q\b\t\j\x\9\y\p\k\6\w\k\p\9\s\4\e\c\c\u\d\s\5\0\b\8\z\k\d\j\f\r\s\d\e\v\i\3\o\7\y\s\2\8\l\f\x\6\q\o\q\6\d\f\h\d\j\y\5\s\u\v\0\a\u\6\i\f\7\u\6\1\7\n\i\f\z\a\m\o\3\0\2\7\q\q\8\8\4\j\i\7\j\3\7\u\j\k\s\2\l\9\k\r\s\3\e\a\o\g\m\t\x\l\f\3\r\e\8\c\5\p\b\5\z\a\v\9\c\o\j\m\g\5\q\s\u\5\r\3\t\x\j\8\t\7\c\8\c\v\v\d\i\u\l\k\3\g\8\m\w\p\8\q\l\q\i\b\r\4\t\m\l\s\h\1\7\c\r\8\l\d\0\b\x\5\0\1\r\8\8\k\8\q\w\l\2\0\9\j\o\e\d\l\f\v\f\p\7\u\g\w\j\q\4\e\v\g\m\n\q\k\i\o\f\9\l\e\7\t\8\o\1\t\x\g\o\5\n\w\g\i\4\l\9\1\t\c\2\e\4\1\c\j\m\s\1\8\h\k\v\u\j\6\7\t\f\g\e\l\7\3\z\b\k\u\4\6\z\w\n\i\k\o\i\2\v\o\c\0\c\h\r\v\o\7\r\t\m\3\1\h\j\2\c\8\4\6\y\i\s\i\w\l\4\t\h\6\t\8\d\6\0\l\p\z\t\3\d\r\8\v\a\f\q\3\w\r\c\g\g\m\5\r\m\t\l\h\2\y\4\v\5\z\t\k\m\e\y\w\3\g\w\c\z\b\c\c\w\e\5\p\z\p\s\0\9\s\s\z\u\6\f\9\u\o\i\f\w\i\n\j\0\z\j\t\t\x\e\0\r\h\2\l\3\p\b\a\6\6\v\6\6\f\w\x\1\k\p\z\w\o\o\u\i\3\o\m\5\2\n\g\v\a\m\7\h\t\t\p\x\b\z\f\r\z\7\m\2\4\5\2\j\2\p\v\2\w\w\1\k\t\r\l\1\p\m\2\g\s\6\c\1\7\d\x\c\3\8\u\6\8\i\d\r\l\2\v\h\j\z\6\n\f\2\9\c\c\k\y\2\f\i\0\j\a\j\o\n\1\7\u\7\4\p\h\y\p\b\u\w\i\x\w\n\h\0\p\c\1\u\2\c\b\j\4\9\p\w\1\a\0\p\r\d\g\p\k\8\c\7\s\f\w\h\v\l\u\4\u\t\2\s\v\r\m\i\g\r\p\3\m\4\s\o\z\q\4\8\f\i\p\s\1\f\0\e\5\2\7\f\4\z\7\g\r\6\r\e\g\n\c\a\0\1\6\u\v\8\g\m\9\4\7\
6\7\u\x\x\9\3\0\v\l\f\g\s\6\w\3\w\p\y\3\7\f\3\2\f\u\5\3\w\3\y\3\c\t\r\i\e\f\t\1\u\r\t\w\w\n\a\b\m\y\g\c\2\9\5\1\e\n\f\i\o\c\c\w\9\6\7\j\b\a\w\q\f\y\r\1\h\2\l\h\l\1\1\k\c\1\g\v\q\g\y\v\6\4\n\h\b\x\w\s\u\a\4\u\q\j\d\u\v\k\3\8\n\g\6\8\c\n\c\e\s\3\4\o\9\i\k\p\s\m\4\p\9\p\e\h\m\2\1\4\a\y\5\q\a\g\i\1\3\o\d\g\6\l\k\z\j\a\x\j\l\c\f\h\r\n\a\h\3\n\i\a\d\7\b\w\v\c\h\g\q\i\p\3\i\i\q\2\p\r\4\s\h\y\t\v\e\b\c\9\c\c\r\g\6\b\g\i\t\l\o\f\l\k\s\g\m\s\0\n\4\3\6\e\y\l\j\5\q\n\9\7\2\f\n\w\s\q\b\v\k\c\n\m\j\4\b\3\0\7\t\z\2\d\u\l\u\8\g\a\b\y\p\g\r\x\a\j\h\b\k\5\b\e\l\c\s\l\p\9\z\7\2\x\d\x\x\5\z\8\a\3\f\d\0\8\k\v\2\f\l\q\0\w\c\l\u\p\u\3\c\v\n\o\u\z\w\u\r\v\5\h\9\2\c\o\g\q\y\3\w\e\6\w\f\6\9\o\6\j\h\x\t\8\5\c\d\r\o\j\a\v\v\p\k\w\3\f\y\0\z\a\x\a\9\u\f\7\c\8\w\d\j\f\o\1\5\p\h\u\o\v\w\c\8\u\w\i\a\7\x\w\w\7\r\d\c\9\1\b\d\f\d\q\9\o\7\m\w\d\b\j\0\8\8\d\g\b\e\g\d\n\u\5\b\q\9\1\4\o\s\x\w\i\4\b\s\f\q\u\z\i\s\2\g\3\9\m\e\n\e\9\6\d\o\s\9\1\1\t\o\e\l\p\7\3\n\u\v\6\k\8\g\9\k\o\9\p\7\e\9\z\g\h\q\6\c\i\c\q\g\d\o\e\b\3\p\w\c\t\i\q\1\p\5\p\5\9\k\l\3\x\r\j\e\n\b\x\q\n\8\p\2\y\4\p\o\0\l\y\r\9\0\h\2\s\v\u\q\r\f\h\i\v\w\f\0\n\2\a\e\6\n\h\s\r\n\g\x\5\g\q\5\x\j\s\8\b\r\8\6\w\p\6\h\z\4\0\6\j\c\s\w\j\k\g\t\e\u\w\n\3\5\h\c\2\d\9\8\f\w\e\i\l\p\9\c\j\e\m\5\5\o\q\q\m\5\z\v\g\9\r\s\l\e\w\h\5\v\y\o\2\o\a\w\s\h\5\z\z\u\v\3\u\7\b\x\i\l\q\i\g\p\9\x\9\u\7\h\7\w\h\f\q\3\7\0\8\a\b\x\3\d\v\5\d\n\u\a\d\9\b\r\c\4\1\p\6\2\g\7\0\8\h\7\o\n\9\q\4\u\t\u\c\8\1\u\h\m\o\t\w\d\d\2\y\x\p\p\k\a\c\7\2\w\8\2\8\o\v\i\i\1\o\t\a\3\j\0\w\z\j\g\7\o\l\7\b\i\i\q\s\r\f\e\9\2\3\2\l\d\f\z\6\t\g\m\g\a\n\b\o\s\k\6\v\g\y\j\4\b\m\t\a\w\b\z\4\n\m\5\o\s\d\k\1\u\y\9\0\c\x\c\7\m\a\o\1\y\t\d\2\a\m\y\u\i\3\2\c\b\k\y\4\5\m\d\q\o\p\h\5\b\0\q\9\j\f\s\f\5\9\d\0\t\v\3\w\e\1\b\2\m\5\e\z\9\v\4\u\9\g\r\k\6\4\5\s\8\l\x\j\d\w\a\r\h\4\l\g\z\a\v\v\o\2\6\u\7\6\j\5\t\s\y\c\m\x\i\u\q\h\c\y\1\i\c\5\y\g\h\2\l\d\u\g\9\e\a\c\2\5\f\3\v\3\l\z\s\x\s\7\9\j\c\r\5\8\5\r\i\0\r\1\h\y\a\n\7\r\k\m\3\m\m\t\m\r\4\8\d\9\b\o\4\f\s\h\3\j\4\5\c\2\l\p\7\x\e\s\0\z\a\i\v\c\c\k\9\v\d\c\b\3\r\6\s\s\t\u\p\b\3\e\y\u\x\8\5\f\l\y\g\b\5\r\w\f\q\f\2\f\8\m\j\w\2\d\2\q\4\v\9\i\l\h\0\m\z\9\0\2\t\b\7\q\l\2\o\p\m\r\4\i\p\d\n\b\r\m\7\8\3\h\y\a\t\q\g\i\y\9\0\4\c\6\r\z\1\g\g\2\4\f\l\6\3\5\k\w\g\6\i\4\c\l\q\9\m\m\t\k\h\z\2\x\i\x\7\o\r\c\5\1\7\h\t\2\3\4\8\9\r\9\7\d\5\h\y\k\a\q\v\5\d\u\r\0\g\b\o\1\p\2\1\l\z\0\x\y\k\e\t\8\z\9\w\z\4\3\i\d\z\s\v\y\h\v\f\l\i\q\d\u\h\h\e\7\l\f\7\k\d\y\s\b\c\c\a\3\p\k\q\k\y\l\7\t\7\r\i\s\t\g\p\0\p\t\9\0\b\1\i\m\9\e\l\s\3\8\c\2\y\r\g\8\s\g\b\g\0\9\2\c\v\1\h\y\v\j\e\z\s\q\7\o\j\6\l\l\w\i\w\3\v\i\o\2\b\c\y\w\4\i\8\q\3\c\w\c\g\n\7\u\3\a\f\b\y\m\c\h\x\5\h\d\p\v\k\x\v\d\0\y\w\3\k\y\l\6\h\y\o\0\w\0\t\g\j\b\s\4\t\g\h\w\g\u\l\d\k\x\3\z\9\d\w\y\s\j\x\k\f\z\2\4\z\w\1\w\5\u\2\u\f\f\6\d\2\5\9\z\m\p\o\p\r\y\s\c\f\l\q\k\6\m\x\p\z\n\5\9\q\u\o\u\e\h\0\6\d\k\4\1\k\d\2\o\y\7\7\1\j\1\8\t\z\q\j\q\d\t\x\6\n\6\a\o\r\c\v\x\o\7\p\w\4\b\c\h\d\t\6\p\x\d\s\n\l\n\4\f\6\9\1\4\i\d\1\f\x\1\1\k\v\l\3\x\n\u\t\4\i\h\c\7\b\k\d\r\8\r\y\b\f\s\e\d\o\m\c\1\0\x\8\6\n\u\x\x\y\9\f\o\0\k\n\m\9\w\g\j\h\q\6\7\t\4\n\v\m\x\3\7\g\n\m\7\l\c\3\5\h\3\4\6\8\o\y\a\2\j\x\a\e\a\a\m\5\z\i\m\k\1\y\c\7\k\q\n\1\b\b\w\k\s\x\s\p\d\w\z\n\n\d\x\7\y\g\e\h\n\l\n\y\0\g\e\w\9\j\c\i\t\y\f\o\7\y\y\z\h\z\b\t\s\3\w\n\6\2\5\9\h\0\2\d\l\d\2\k\c\y\8\a\t\2\q\w\e\c\p\s\p\8\1\6\p\y\9\s\q\2\o\e\c\c\h\t\t\3\a\m\l\c\j\q\k\y\9\e\b\p\x\4\u\e\5\u\d\7\l\0\h\6\j\t\u\2\i\7\y\k\l\c\n\4\j\a\z\h\i\i\c\y\v\u\a\k\9\p\8\0\1\7\r\f\c\1\f\f\m\y\i\y\e\8\e\y\3\p\z\h\e\4\6\1\v\n\d\w\m\e\i\8\o\9\1\v\j\w\8\y\j\1\3\o\5\g\z\n\m\5\b\2\3\g\0\s\5\6\6\k\c\u\g\e\d\i\2\8\v\x\5\1\8\i\u\6\d\i\3\n\2\2\f\y\m\b\a\v\z\2\b\3\9\s\g
\h\b\8\q\5\1\z\g\d\7\x\z\1\d\7\4\6\c\n\v\2\j\u\e\m\z\n\x\i\y\l\g\m\j\d\a\f\z\n\3\1\s\r\a\n\w\4\b\p\3\l\4\l\v\0\l\w\o\k\9\1\e\r\s\m\i\c\b\8\l\5\f\j\s\0\h\j\r\z\8\f\s\e\l\y\t\7\t\q\2\o\r\r\m\a\z\r\k\b\s\g\y\f\i\8\d\g\2\b\1\h\e\c\4\5\4\g\2\9\k\1\7\q\l\t\3\s\2\a\c\r\z\j\e\4\d\3\4\r\t\j\p\i\3\y\0\n\d\c\a\h\c\h\3\b\t\h\0\j\c\x\d\0\e\z\z\f\h\k\u\n\y\y\m\a\z\o\k\9\9\2\v\e\4\9\x\r\l\v\b\p\y\v\a\z\d\k\v\o\r\z\h\f\7\8\e\t\1\m\8\5\s\f\y\4\g\h\c\j\k\z\3\p\0\z\t\v\4\0\i\1\6\v\j\3\9\c\u\3\q\3\x\b\6\l\2\7\8\s\9\s\s\f\u\p\l\u\4\h\e\q\0\j\l\w\k\4\n\w\j\6\a\e\6\g\4\m\j\h\o\o\4\m\k\g\s\n\8\g\b\6\3\2\1\0\d\3\i\9\q\l\i\l\a\o\r\4\c\p\e\y\a\h\y\8\a\r\p\5\e\k\9\b\a\0\s\o\q\w\6\g\j\m\h\3\t\z\k\h\y\i\l\q\f\j\3\5\0\v\e\4\2\j\3\7\i\k\y\j\g\1\x\5\f\k\0\u\a\3\4\h\r\d\u\0\r\d\d\u\y\6\r\5\4\n\x\8\g\4\8\w\f\w\z\b\w\t\e\s\a\z\f\q\f\0\2\r\n\w\6\2\z\3\8\1\y\m\6\b\u\d\b\0\p\c\y\z\s\a\7\m\6\g\l\d\7\l\m\5\m\g\g\z\6\y\i\s\l\m\u\h\u\e\4\u\f\2\e\r\e\7\l\u\7\j\w\e\u\6\i\b\y\r\s\2\l\a\g\t\1\u\n\h\l\w\0\k\p\f\q\t\f\x\k\e\c\m\r\6\e\r\a\c\3\v\s\l\7\0\k\d\9\r\z\w\u\s\b\m\1\z\i\i\l\d\x\b\2\5\y\2\7\q\k\b\u\t\x\e\o\a ]] 00:05:32.214 00:05:32.214 real 0m0.969s 00:05:32.214 user 0m0.631s 00:05:32.214 sys 0m0.385s 00:05:32.214 ************************************ 00:05:32.214 END TEST dd_rw_offset 00:05:32.214 ************************************ 00:05:32.214 19:39:27 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.214 19:39:27 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:05:32.214 19:39:27 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:05:32.214 19:39:27 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:05:32.214 19:39:27 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:32.214 19:39:27 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:32.214 19:39:27 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:05:32.214 19:39:27 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:32.214 19:39:27 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:05:32.214 19:39:27 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:32.214 19:39:27 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:05:32.214 19:39:27 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:32.214 19:39:27 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:32.214 { 00:05:32.214 "subsystems": [ 00:05:32.214 { 00:05:32.214 "subsystem": "bdev", 00:05:32.214 "config": [ 00:05:32.214 { 00:05:32.214 "params": { 00:05:32.214 "trtype": "pcie", 00:05:32.214 "traddr": "0000:00:10.0", 00:05:32.214 "name": "Nvme0" 00:05:32.214 }, 00:05:32.214 "method": "bdev_nvme_attach_controller" 00:05:32.214 }, 00:05:32.214 { 00:05:32.214 "method": "bdev_wait_for_examine" 00:05:32.214 } 00:05:32.214 ] 00:05:32.214 } 00:05:32.214 ] 00:05:32.214 } 00:05:32.214 [2024-11-26 19:39:27.303913] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
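Note on the config block above: that JSON fragment is what gen_conf emits and hands to spdk_dd through an anonymous descriptor (--json /dev/fd/62) for the clear_nvme step. A minimal stand-alone sketch of the same step, with the config written to a temporary file instead of a process substitution, and with the binary path and PCIe address assumed from this log:

#!/usr/bin/env bash
# Sketch only (paths and the PCIe address are taken from this log): attach the
# controller as bdev "Nvme0" and zero its first MiB, mirroring the clear_nvme step.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd

conf=$(mktemp)
cat > "$conf" <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON

# One 1 MiB block of zeroes written to the Nvme0n1 bdev defined by the config above.
"$SPDK_DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json "$conf"
rm -f "$conf"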
00:05:32.214 [2024-11-26 19:39:27.304000] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59234 ] 00:05:32.214 [2024-11-26 19:39:27.442188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.471 [2024-11-26 19:39:27.485545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.471 [2024-11-26 19:39:27.521967] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:32.471  [2024-11-26T19:39:27.976Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:32.729 00:05:32.729 19:39:27 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:32.729 ************************************ 00:05:32.729 END TEST spdk_dd_basic_rw 00:05:32.729 ************************************ 00:05:32.729 00:05:32.729 real 0m13.167s 00:05:32.729 user 0m9.099s 00:05:32.729 sys 0m4.278s 00:05:32.729 19:39:27 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.729 19:39:27 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:32.729 19:39:27 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:05:32.729 19:39:27 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:32.729 19:39:27 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.729 19:39:27 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:05:32.729 ************************************ 00:05:32.729 START TEST spdk_dd_posix 00:05:32.729 ************************************ 00:05:32.729 19:39:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:05:32.729 * Looking for test storage... 
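The END/START banners and the real/user/sys timing lines in this stretch come from the run_test helper that drives each sub-test. A simplified, hedged reconstruction of that wrapper follows; the function name here is made up, and the real implementation in test/common/autotest_common.sh additionally manages xtrace and argument validation:

# Simplified reconstruction of the run_test wrapper whose banners and timing lines
# appear above; xtrace handling and argument checks are omitted here.
run_test_sketch() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return "$rc"
}

# Usage, mirroring the invocation in the log:
# run_test_sketch spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh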
00:05:32.729 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:32.729 19:39:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:32.729 19:39:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # lcov --version 00:05:32.729 19:39:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:32.729 19:39:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:32.729 19:39:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:32.729 19:39:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:32.729 19:39:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:32.729 19:39:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:05:32.729 19:39:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:05:32.729 19:39:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:05:32.729 19:39:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:05:32.729 19:39:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:05:32.729 19:39:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:05:32.729 19:39:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:05:32.729 19:39:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:32.729 19:39:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:05:32.729 19:39:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:05:32.729 19:39:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:32.729 19:39:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:32.729 19:39:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:05:32.729 19:39:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:05:32.729 19:39:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:32.729 19:39:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:05:32.729 19:39:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:05:32.729 19:39:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:05:32.729 19:39:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:05:32.729 19:39:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:32.729 19:39:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:05:32.729 19:39:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:05:32.729 19:39:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:32.729 19:39:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:32.729 19:39:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:05:32.729 19:39:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:32.729 19:39:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:32.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.729 --rc genhtml_branch_coverage=1 00:05:32.729 --rc genhtml_function_coverage=1 00:05:32.729 --rc genhtml_legend=1 00:05:32.729 --rc geninfo_all_blocks=1 00:05:32.729 --rc geninfo_unexecuted_blocks=1 00:05:32.729 00:05:32.729 ' 00:05:32.729 19:39:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:32.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.729 --rc genhtml_branch_coverage=1 00:05:32.729 --rc genhtml_function_coverage=1 00:05:32.729 --rc genhtml_legend=1 00:05:32.729 --rc geninfo_all_blocks=1 00:05:32.729 --rc geninfo_unexecuted_blocks=1 00:05:32.729 00:05:32.729 ' 00:05:32.729 19:39:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:32.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.729 --rc genhtml_branch_coverage=1 00:05:32.729 --rc genhtml_function_coverage=1 00:05:32.729 --rc genhtml_legend=1 00:05:32.729 --rc geninfo_all_blocks=1 00:05:32.729 --rc geninfo_unexecuted_blocks=1 00:05:32.729 00:05:32.729 ' 00:05:32.729 19:39:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:32.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.729 --rc genhtml_branch_coverage=1 00:05:32.729 --rc genhtml_function_coverage=1 00:05:32.729 --rc genhtml_legend=1 00:05:32.729 --rc geninfo_all_blocks=1 00:05:32.729 --rc geninfo_unexecuted_blocks=1 00:05:32.729 00:05:32.729 ' 00:05:32.729 19:39:27 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:32.729 19:39:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:05:32.729 19:39:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:32.729 19:39:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:32.729 19:39:27 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:32.729 19:39:27 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:32.729 19:39:27 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:32.730 19:39:27 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:32.730 19:39:27 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:05:32.730 19:39:27 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:32.730 19:39:27 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:05:32.730 19:39:27 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:05:32.730 19:39:27 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:05:32.730 19:39:27 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:05:32.730 19:39:27 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:32.730 19:39:27 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:32.730 19:39:27 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:05:32.730 19:39:27 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:05:32.730 * First test run, liburing in use 00:05:32.730 19:39:27 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:05:32.730 19:39:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:32.730 19:39:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:05:32.730 19:39:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:05:32.730 ************************************ 00:05:32.730 START TEST dd_flag_append 00:05:32.730 ************************************ 00:05:32.730 19:39:27 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1129 -- # append 00:05:32.730 19:39:27 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:05:32.730 19:39:27 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:05:32.730 19:39:27 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:05:32.730 19:39:27 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:05:32.730 19:39:27 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:05:32.730 19:39:27 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=g7lr356t9fe5b186jphf3bs2ta79ihfw 00:05:32.730 19:39:27 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:05:32.730 19:39:27 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:05:32.730 19:39:27 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:05:32.730 19:39:27 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=55de344jcm9rlsnitcw2epqgxopmgykk 00:05:32.730 19:39:27 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s g7lr356t9fe5b186jphf3bs2ta79ihfw 00:05:32.730 19:39:27 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s 55de344jcm9rlsnitcw2epqgxopmgykk 00:05:32.730 19:39:27 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:05:32.987 [2024-11-26 19:39:27.999841] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
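The dd_flag_append run set up above writes one random 32-byte string into dd.dump0 and a second into dd.dump1, appends dump0 onto dump1 with spdk_dd, and then compares the result (the 32/32 B copy and the string comparison appear a little further down). A reduced sketch of the same flow, assuming the paths from this log and trivial placeholder payloads:

# Sketch of the dd_flag_append flow traced here, using short placeholder payloads in
# place of the two 32-byte random strings; file and binary paths mirror the log.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
dump0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
dump1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1

printf %s "aaaa" > "$dump0"   # stand-in for the first 32 random bytes
printf %s "bbbb" > "$dump1"   # stand-in for the second 32 random bytes

# --oflag=append keeps dump1's existing bytes and writes dump0's bytes after them.
"$SPDK_DD" --if="$dump0" --of="$dump1" --oflag=append

# The test then checks that dump1 now holds its old content followed by dump0.
[[ "$(cat "$dump1")" == "bbbbaaaa" ]] && echo "append behaved as expected"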
00:05:32.987 [2024-11-26 19:39:28.000052] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59306 ] 00:05:32.987 [2024-11-26 19:39:28.137463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.987 [2024-11-26 19:39:28.175335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.987 [2024-11-26 19:39:28.209569] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:33.244  [2024-11-26T19:39:28.491Z] Copying: 32/32 [B] (average 31 kBps) 00:05:33.244 00:05:33.244 19:39:28 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ 55de344jcm9rlsnitcw2epqgxopmgykkg7lr356t9fe5b186jphf3bs2ta79ihfw == \5\5\d\e\3\4\4\j\c\m\9\r\l\s\n\i\t\c\w\2\e\p\q\g\x\o\p\m\g\y\k\k\g\7\l\r\3\5\6\t\9\f\e\5\b\1\8\6\j\p\h\f\3\b\s\2\t\a\7\9\i\h\f\w ]] 00:05:33.244 00:05:33.244 real 0m0.375s 00:05:33.244 user 0m0.185s 00:05:33.244 sys 0m0.152s 00:05:33.244 19:39:28 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:33.244 19:39:28 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:05:33.244 ************************************ 00:05:33.244 END TEST dd_flag_append 00:05:33.244 ************************************ 00:05:33.244 19:39:28 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:05:33.244 19:39:28 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.244 19:39:28 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.244 19:39:28 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:05:33.244 ************************************ 00:05:33.244 START TEST dd_flag_directory 00:05:33.244 ************************************ 00:05:33.244 19:39:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1129 -- # directory 00:05:33.244 19:39:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:33.244 19:39:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:05:33.244 19:39:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:33.244 19:39:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:33.244 19:39:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:33.244 19:39:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:33.244 19:39:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:33.244 19:39:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:33.244 19:39:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:33.244 19:39:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:33.244 19:39:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:33.244 19:39:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:33.244 [2024-11-26 19:39:28.414065] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:05:33.244 [2024-11-26 19:39:28.414264] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59329 ] 00:05:33.502 [2024-11-26 19:39:28.555184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.502 [2024-11-26 19:39:28.592127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.502 [2024-11-26 19:39:28.624902] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:33.502 [2024-11-26 19:39:28.649755] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:05:33.502 [2024-11-26 19:39:28.649809] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:05:33.502 [2024-11-26 19:39:28.649829] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:33.502 [2024-11-26 19:39:28.710167] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:05:33.759 19:39:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:05:33.759 19:39:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:33.759 19:39:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:05:33.759 19:39:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:05:33.759 19:39:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:05:33.759 19:39:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:33.759 19:39:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:05:33.759 19:39:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:05:33.759 19:39:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:05:33.759 19:39:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:33.759 19:39:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:33.759 19:39:28 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:33.759 19:39:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:33.759 19:39:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:33.759 19:39:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:33.759 19:39:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:33.759 19:39:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:33.759 19:39:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:05:33.759 [2024-11-26 19:39:28.797726] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:05:33.759 [2024-11-26 19:39:28.797947] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59338 ] 00:05:33.759 [2024-11-26 19:39:28.937000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.759 [2024-11-26 19:39:28.977268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.016 [2024-11-26 19:39:29.012561] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:34.016 [2024-11-26 19:39:29.039342] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:05:34.016 [2024-11-26 19:39:29.039382] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:05:34.016 [2024-11-26 19:39:29.039393] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:34.016 [2024-11-26 19:39:29.103821] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:05:34.016 19:39:29 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:05:34.016 19:39:29 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:34.016 19:39:29 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:05:34.016 19:39:29 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:05:34.016 19:39:29 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:05:34.016 19:39:29 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:34.016 00:05:34.016 real 0m0.773s 00:05:34.016 user 0m0.390s 00:05:34.016 sys 0m0.174s 00:05:34.016 19:39:29 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.016 ************************************ 00:05:34.016 END TEST dd_flag_directory 00:05:34.016 ************************************ 00:05:34.016 19:39:29 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:05:34.016 19:39:29 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:05:34.017 19:39:29 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.017 19:39:29 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.017 19:39:29 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:05:34.017 ************************************ 00:05:34.017 START TEST dd_flag_nofollow 00:05:34.017 ************************************ 00:05:34.017 19:39:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1129 -- # nofollow 00:05:34.017 19:39:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:05:34.017 19:39:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:05:34.017 19:39:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:05:34.017 19:39:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:05:34.017 19:39:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:34.017 19:39:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:05:34.017 19:39:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:34.017 19:39:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:34.017 19:39:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:34.017 19:39:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:34.017 19:39:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:34.017 19:39:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:34.017 19:39:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:34.017 19:39:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:34.017 19:39:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:34.017 19:39:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:34.017 [2024-11-26 19:39:29.234512] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
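dd_flag_nofollow first links dd.dump0.link and dd.dump1.link to the real dump files and then expects every copy that names a link together with the nofollow flag to fail (see the "Too many levels of symbolic links" errors below). A hedged stand-alone sketch of the first negative case, with paths assumed from this log:

# Sketch of the nofollow negative case prepared above; the resulting ELOOP failure
# ("Too many levels of symbolic links") shows up a few entries further down.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
dump0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
dump1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
link=$dump0.link

ln -fs "$dump0" "$link"

# With --iflag=nofollow the input symlink must be rejected, so success here is a bug.
if ! "$SPDK_DD" --if="$link" --iflag=nofollow --of="$dump1"; then
    echo "nofollow correctly refused to read through the symlink"
fi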
00:05:34.017 [2024-11-26 19:39:29.234607] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59367 ] 00:05:34.274 [2024-11-26 19:39:29.372834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.274 [2024-11-26 19:39:29.411107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.274 [2024-11-26 19:39:29.445448] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:34.274 [2024-11-26 19:39:29.472663] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:05:34.274 [2024-11-26 19:39:29.472709] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:05:34.274 [2024-11-26 19:39:29.472721] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:34.531 [2024-11-26 19:39:29.535253] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:05:34.531 19:39:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:05:34.531 19:39:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:34.531 19:39:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:05:34.531 19:39:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:05:34.531 19:39:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:05:34.531 19:39:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:34.531 19:39:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:05:34.531 19:39:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:05:34.531 19:39:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:05:34.531 19:39:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:34.531 19:39:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:34.531 19:39:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:34.531 19:39:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:34.531 19:39:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:34.531 19:39:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:34.531 19:39:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:34.531 19:39:29 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:34.531 19:39:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:05:34.531 [2024-11-26 19:39:29.615019] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:05:34.531 [2024-11-26 19:39:29.615087] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59376 ] 00:05:34.531 [2024-11-26 19:39:29.751882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.789 [2024-11-26 19:39:29.790582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.789 [2024-11-26 19:39:29.824094] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:34.789 [2024-11-26 19:39:29.850450] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:05:34.789 [2024-11-26 19:39:29.850496] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:05:34.789 [2024-11-26 19:39:29.850507] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:34.789 [2024-11-26 19:39:29.915286] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:05:34.789 19:39:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:05:34.789 19:39:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:34.789 19:39:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:05:34.789 19:39:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:05:34.789 19:39:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:05:34.789 19:39:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:34.789 19:39:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:05:34.789 19:39:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:05:34.789 19:39:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:05:34.789 19:39:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:34.789 [2024-11-26 19:39:30.003422] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
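The es bookkeeping traced above (es=216, es > 128, es=88, es=1) comes from the NOT wrapper used for the negative cases: it runs a command that is expected to fail and converts any non-zero exit status into success. A simplified reconstruction follows; the function name is made up, and the real helper in test/common/autotest_common.sh also validates the executable before running it:

# Simplified reconstruction of the NOT helper whose es handling is traced above.
NOT_sketch() {
    local es=0
    "$@" || es=$?
    if (( es > 128 )); then
        es=$(( es - 128 ))   # fold signal-style exit statuses, as in es=216 -> es=88
    fi
    (( es != 0 ))            # succeed only if the wrapped command failed
}

# Usage, mirroring the negative nofollow copy:
# NOT_sketch "$SPDK_DD" --if="$dump0.link" --iflag=nofollow --of="$dump1"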
00:05:34.789 [2024-11-26 19:39:30.003512] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59384 ] 00:05:35.047 [2024-11-26 19:39:30.146507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.047 [2024-11-26 19:39:30.183491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.047 [2024-11-26 19:39:30.215490] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:35.047  [2024-11-26T19:39:30.551Z] Copying: 512/512 [B] (average 500 kBps) 00:05:35.304 00:05:35.304 ************************************ 00:05:35.304 END TEST dd_flag_nofollow 00:05:35.304 ************************************ 00:05:35.304 19:39:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ 7vqlrisksvqs8izts60lnx64ts1qk3msj8vmy7msxztu5qzo752m3pbcajdyaowksocmuiwbxp4mv7ahezeuir6t5zqpdpg6la5kyhwhryuojvmwizc3l2353ahfq38osog6kqi8o4ge3idcw0l0txe8fb6w6bcruagcr3wg9s78xhcj7fhdoo2zkcmcs6bgbnmx7c28xnzvo2zmor51ndeyj8fofqfe3ddh9ryudviq84kfeeufx173vj7pa7c1kqghda1e79zsimt07b7uy2rm5u50zquzg9ptclypey3izyhurh3at50l0ggcpmhvvcoqa7q90dsvg6eumy0i4y8n4fw83f3jqs3iwp8dhlcinxh78z3g92bw0zezl51lewvf5j4akxvf30zlp22wkjnk9vu5re7qlc48t6ea3b5g0etk23r6bg4t3oybzujvwynbd5vvu0yes8w6kcfnfz539drcoma782yybanzd3isev8vyn3o8uhtiwpf324k == \7\v\q\l\r\i\s\k\s\v\q\s\8\i\z\t\s\6\0\l\n\x\6\4\t\s\1\q\k\3\m\s\j\8\v\m\y\7\m\s\x\z\t\u\5\q\z\o\7\5\2\m\3\p\b\c\a\j\d\y\a\o\w\k\s\o\c\m\u\i\w\b\x\p\4\m\v\7\a\h\e\z\e\u\i\r\6\t\5\z\q\p\d\p\g\6\l\a\5\k\y\h\w\h\r\y\u\o\j\v\m\w\i\z\c\3\l\2\3\5\3\a\h\f\q\3\8\o\s\o\g\6\k\q\i\8\o\4\g\e\3\i\d\c\w\0\l\0\t\x\e\8\f\b\6\w\6\b\c\r\u\a\g\c\r\3\w\g\9\s\7\8\x\h\c\j\7\f\h\d\o\o\2\z\k\c\m\c\s\6\b\g\b\n\m\x\7\c\2\8\x\n\z\v\o\2\z\m\o\r\5\1\n\d\e\y\j\8\f\o\f\q\f\e\3\d\d\h\9\r\y\u\d\v\i\q\8\4\k\f\e\e\u\f\x\1\7\3\v\j\7\p\a\7\c\1\k\q\g\h\d\a\1\e\7\9\z\s\i\m\t\0\7\b\7\u\y\2\r\m\5\u\5\0\z\q\u\z\g\9\p\t\c\l\y\p\e\y\3\i\z\y\h\u\r\h\3\a\t\5\0\l\0\g\g\c\p\m\h\v\v\c\o\q\a\7\q\9\0\d\s\v\g\6\e\u\m\y\0\i\4\y\8\n\4\f\w\8\3\f\3\j\q\s\3\i\w\p\8\d\h\l\c\i\n\x\h\7\8\z\3\g\9\2\b\w\0\z\e\z\l\5\1\l\e\w\v\f\5\j\4\a\k\x\v\f\3\0\z\l\p\2\2\w\k\j\n\k\9\v\u\5\r\e\7\q\l\c\4\8\t\6\e\a\3\b\5\g\0\e\t\k\2\3\r\6\b\g\4\t\3\o\y\b\z\u\j\v\w\y\n\b\d\5\v\v\u\0\y\e\s\8\w\6\k\c\f\n\f\z\5\3\9\d\r\c\o\m\a\7\8\2\y\y\b\a\n\z\d\3\i\s\e\v\8\v\y\n\3\o\8\u\h\t\i\w\p\f\3\2\4\k ]] 00:05:35.304 00:05:35.304 real 0m1.168s 00:05:35.304 user 0m0.591s 00:05:35.304 sys 0m0.344s 00:05:35.304 19:39:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.304 19:39:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:05:35.304 19:39:30 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:05:35.304 19:39:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.304 19:39:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.304 19:39:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:05:35.304 ************************************ 00:05:35.304 START TEST dd_flag_noatime 00:05:35.304 ************************************ 00:05:35.304 19:39:30 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1129 -- # noatime 00:05:35.304 19:39:30 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local 
atime_if 00:05:35.304 19:39:30 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:05:35.304 19:39:30 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:05:35.304 19:39:30 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:05:35.304 19:39:30 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:05:35.304 19:39:30 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:35.304 19:39:30 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1732649970 00:05:35.304 19:39:30 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:35.304 19:39:30 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1732649970 00:05:35.304 19:39:30 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:05:36.246 19:39:31 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:36.246 [2024-11-26 19:39:31.446833] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:05:36.246 [2024-11-26 19:39:31.446924] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59421 ] 00:05:36.518 [2024-11-26 19:39:31.590199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.518 [2024-11-26 19:39:31.626119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.518 [2024-11-26 19:39:31.656425] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:36.518  [2024-11-26T19:39:32.023Z] Copying: 512/512 [B] (average 500 kBps) 00:05:36.776 00:05:36.776 19:39:31 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:36.776 19:39:31 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1732649970 )) 00:05:36.776 19:39:31 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:36.776 19:39:31 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1732649970 )) 00:05:36.776 19:39:31 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:36.776 [2024-11-26 19:39:31.827631] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
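dd_flag_noatime records the access time of each dump file with stat --printf=%X (1732649970 here), sleeps one second, copies dump0 with --iflag=noatime and checks that the timestamp did not move; a later copy without the flag confirms the timestamp does advance. A reduced sketch of the first half, with paths assumed from this log:

# Sketch of the noatime check driven here: record the access time, read the file
# through spdk_dd with --iflag=noatime, and expect the access time not to change.
# Assumes the filesystem updates atime at all (i.e. it is not mounted with noatime).
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
dump0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
dump1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1

atime_before=$(stat --printf=%X "$dump0")
sleep 1                                    # make a potential atime bump observable

"$SPDK_DD" --if="$dump0" --iflag=noatime --of="$dump1"

atime_after=$(stat --printf=%X "$dump0")
(( atime_before == atime_after )) && echo "noatime left the access time untouched"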
00:05:36.776 [2024-11-26 19:39:31.827898] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59434 ] 00:05:36.776 [2024-11-26 19:39:31.977338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.776 [2024-11-26 19:39:32.009172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.034 [2024-11-26 19:39:32.037750] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:37.034  [2024-11-26T19:39:32.281Z] Copying: 512/512 [B] (average 500 kBps) 00:05:37.034 00:05:37.034 19:39:32 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:37.034 ************************************ 00:05:37.034 END TEST dd_flag_noatime 00:05:37.034 ************************************ 00:05:37.034 19:39:32 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1732649972 )) 00:05:37.034 00:05:37.034 real 0m1.766s 00:05:37.034 user 0m0.371s 00:05:37.034 sys 0m0.320s 00:05:37.034 19:39:32 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.034 19:39:32 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:05:37.034 19:39:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:05:37.034 19:39:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:37.034 19:39:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.034 19:39:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:05:37.034 ************************************ 00:05:37.034 START TEST dd_flags_misc 00:05:37.034 ************************************ 00:05:37.034 19:39:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1129 -- # io 00:05:37.034 19:39:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:05:37.034 19:39:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:05:37.034 19:39:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:05:37.034 19:39:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:05:37.034 19:39:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:05:37.034 19:39:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:05:37.034 19:39:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:05:37.034 19:39:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:37.034 19:39:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:05:37.034 [2024-11-26 19:39:32.225019] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
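dd_flags_misc, started at the end of the previous entry, declares flags_ro=(direct nonblock) and flags_rw=(direct nonblock sync dsync) and then copies the same 512-byte payload once per read/write flag pair, checking the result each time; the direct, nonblock, sync and dsync runs below are those iterations. A hedged sketch of the loop, with paths assumed from this log and cmp standing in for the string comparison the test actually performs:

# Sketch of the flag matrix that dd_flags_misc iterates; the direct, nonblock, sync
# and dsync copies that follow in this log are passes of a loop of this shape.
# Assumes dd.dump0 already holds the 512-byte payload generated by the test.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
dump0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
dump1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1

flags_ro=(direct nonblock)                 # flags exercised on the read side
flags_rw=("${flags_ro[@]}" sync dsync)     # write side additionally gets sync/dsync

for flag_ro in "${flags_ro[@]}"; do
    for flag_rw in "${flags_rw[@]}"; do
        "$SPDK_DD" --if="$dump0" --iflag="$flag_ro" --of="$dump1" --oflag="$flag_rw"
        # stand-in for the per-pass content comparison seen in the log
        cmp -s "$dump0" "$dump1" || echo "mismatch with $flag_ro/$flag_rw" >&2
    done
done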
00:05:37.034 [2024-11-26 19:39:32.225082] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59463 ] 00:05:37.292 [2024-11-26 19:39:32.363486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.292 [2024-11-26 19:39:32.395402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.292 [2024-11-26 19:39:32.424548] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:37.292  [2024-11-26T19:39:32.796Z] Copying: 512/512 [B] (average 500 kBps) 00:05:37.549 00:05:37.549 19:39:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ ure4itro1h1pbctk8yami5mevrgw6ohuqs8kq9ed586pt2ii91y36vs9ohlhoijgplgmm0eptebq0l9hn0y7srht372fifet6fom98t899kfxaxkddpo31m1nhwhb6twzyw4v2qjqh8foo8mh289tk74ocaj6difvl2fkbdrcfxbobdgixnh73zc5y02g5fqciyce17yegu08mbphgy1oxnm194kh01l3lu0iv6r5cc9j136e5mq75ro5o28rdwtuuebum93ue67p2ehxphyl2kojozopuhfvsa91dfzj5kc46ayt6jamqpwkkos8gw90ck1kmczq7ldi750apaxf5qxdpm5dcly8yw8diqvd70y0k95a44e8bxfg3klhr8xogahihuoh4p59qdoflnx9djgb3gr4i3ez7nad90rvdgenpjlx4d6pdaizbzfr7n4s0lqnthag2ke8d644bcivuftp4vghzctlxamrupr5qu6rzl3imlo640ljfxd7rsg == \u\r\e\4\i\t\r\o\1\h\1\p\b\c\t\k\8\y\a\m\i\5\m\e\v\r\g\w\6\o\h\u\q\s\8\k\q\9\e\d\5\8\6\p\t\2\i\i\9\1\y\3\6\v\s\9\o\h\l\h\o\i\j\g\p\l\g\m\m\0\e\p\t\e\b\q\0\l\9\h\n\0\y\7\s\r\h\t\3\7\2\f\i\f\e\t\6\f\o\m\9\8\t\8\9\9\k\f\x\a\x\k\d\d\p\o\3\1\m\1\n\h\w\h\b\6\t\w\z\y\w\4\v\2\q\j\q\h\8\f\o\o\8\m\h\2\8\9\t\k\7\4\o\c\a\j\6\d\i\f\v\l\2\f\k\b\d\r\c\f\x\b\o\b\d\g\i\x\n\h\7\3\z\c\5\y\0\2\g\5\f\q\c\i\y\c\e\1\7\y\e\g\u\0\8\m\b\p\h\g\y\1\o\x\n\m\1\9\4\k\h\0\1\l\3\l\u\0\i\v\6\r\5\c\c\9\j\1\3\6\e\5\m\q\7\5\r\o\5\o\2\8\r\d\w\t\u\u\e\b\u\m\9\3\u\e\6\7\p\2\e\h\x\p\h\y\l\2\k\o\j\o\z\o\p\u\h\f\v\s\a\9\1\d\f\z\j\5\k\c\4\6\a\y\t\6\j\a\m\q\p\w\k\k\o\s\8\g\w\9\0\c\k\1\k\m\c\z\q\7\l\d\i\7\5\0\a\p\a\x\f\5\q\x\d\p\m\5\d\c\l\y\8\y\w\8\d\i\q\v\d\7\0\y\0\k\9\5\a\4\4\e\8\b\x\f\g\3\k\l\h\r\8\x\o\g\a\h\i\h\u\o\h\4\p\5\9\q\d\o\f\l\n\x\9\d\j\g\b\3\g\r\4\i\3\e\z\7\n\a\d\9\0\r\v\d\g\e\n\p\j\l\x\4\d\6\p\d\a\i\z\b\z\f\r\7\n\4\s\0\l\q\n\t\h\a\g\2\k\e\8\d\6\4\4\b\c\i\v\u\f\t\p\4\v\g\h\z\c\t\l\x\a\m\r\u\p\r\5\q\u\6\r\z\l\3\i\m\l\o\6\4\0\l\j\f\x\d\7\r\s\g ]] 00:05:37.549 19:39:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:37.549 19:39:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:05:37.549 [2024-11-26 19:39:32.582117] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:05:37.549 [2024-11-26 19:39:32.582789] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59471 ] 00:05:37.549 [2024-11-26 19:39:32.727334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.549 [2024-11-26 19:39:32.760289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.549 [2024-11-26 19:39:32.790848] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:37.807  [2024-11-26T19:39:33.054Z] Copying: 512/512 [B] (average 500 kBps) 00:05:37.807 00:05:37.807 19:39:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ ure4itro1h1pbctk8yami5mevrgw6ohuqs8kq9ed586pt2ii91y36vs9ohlhoijgplgmm0eptebq0l9hn0y7srht372fifet6fom98t899kfxaxkddpo31m1nhwhb6twzyw4v2qjqh8foo8mh289tk74ocaj6difvl2fkbdrcfxbobdgixnh73zc5y02g5fqciyce17yegu08mbphgy1oxnm194kh01l3lu0iv6r5cc9j136e5mq75ro5o28rdwtuuebum93ue67p2ehxphyl2kojozopuhfvsa91dfzj5kc46ayt6jamqpwkkos8gw90ck1kmczq7ldi750apaxf5qxdpm5dcly8yw8diqvd70y0k95a44e8bxfg3klhr8xogahihuoh4p59qdoflnx9djgb3gr4i3ez7nad90rvdgenpjlx4d6pdaizbzfr7n4s0lqnthag2ke8d644bcivuftp4vghzctlxamrupr5qu6rzl3imlo640ljfxd7rsg == \u\r\e\4\i\t\r\o\1\h\1\p\b\c\t\k\8\y\a\m\i\5\m\e\v\r\g\w\6\o\h\u\q\s\8\k\q\9\e\d\5\8\6\p\t\2\i\i\9\1\y\3\6\v\s\9\o\h\l\h\o\i\j\g\p\l\g\m\m\0\e\p\t\e\b\q\0\l\9\h\n\0\y\7\s\r\h\t\3\7\2\f\i\f\e\t\6\f\o\m\9\8\t\8\9\9\k\f\x\a\x\k\d\d\p\o\3\1\m\1\n\h\w\h\b\6\t\w\z\y\w\4\v\2\q\j\q\h\8\f\o\o\8\m\h\2\8\9\t\k\7\4\o\c\a\j\6\d\i\f\v\l\2\f\k\b\d\r\c\f\x\b\o\b\d\g\i\x\n\h\7\3\z\c\5\y\0\2\g\5\f\q\c\i\y\c\e\1\7\y\e\g\u\0\8\m\b\p\h\g\y\1\o\x\n\m\1\9\4\k\h\0\1\l\3\l\u\0\i\v\6\r\5\c\c\9\j\1\3\6\e\5\m\q\7\5\r\o\5\o\2\8\r\d\w\t\u\u\e\b\u\m\9\3\u\e\6\7\p\2\e\h\x\p\h\y\l\2\k\o\j\o\z\o\p\u\h\f\v\s\a\9\1\d\f\z\j\5\k\c\4\6\a\y\t\6\j\a\m\q\p\w\k\k\o\s\8\g\w\9\0\c\k\1\k\m\c\z\q\7\l\d\i\7\5\0\a\p\a\x\f\5\q\x\d\p\m\5\d\c\l\y\8\y\w\8\d\i\q\v\d\7\0\y\0\k\9\5\a\4\4\e\8\b\x\f\g\3\k\l\h\r\8\x\o\g\a\h\i\h\u\o\h\4\p\5\9\q\d\o\f\l\n\x\9\d\j\g\b\3\g\r\4\i\3\e\z\7\n\a\d\9\0\r\v\d\g\e\n\p\j\l\x\4\d\6\p\d\a\i\z\b\z\f\r\7\n\4\s\0\l\q\n\t\h\a\g\2\k\e\8\d\6\4\4\b\c\i\v\u\f\t\p\4\v\g\h\z\c\t\l\x\a\m\r\u\p\r\5\q\u\6\r\z\l\3\i\m\l\o\6\4\0\l\j\f\x\d\7\r\s\g ]] 00:05:37.807 19:39:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:37.807 19:39:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:05:37.807 [2024-11-26 19:39:32.944498] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:05:37.807 [2024-11-26 19:39:32.944698] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59476 ] 00:05:38.064 [2024-11-26 19:39:33.083791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.064 [2024-11-26 19:39:33.118108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.064 [2024-11-26 19:39:33.148535] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:38.064  [2024-11-26T19:39:33.311Z] Copying: 512/512 [B] (average 250 kBps) 00:05:38.064 00:05:38.064 19:39:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ ure4itro1h1pbctk8yami5mevrgw6ohuqs8kq9ed586pt2ii91y36vs9ohlhoijgplgmm0eptebq0l9hn0y7srht372fifet6fom98t899kfxaxkddpo31m1nhwhb6twzyw4v2qjqh8foo8mh289tk74ocaj6difvl2fkbdrcfxbobdgixnh73zc5y02g5fqciyce17yegu08mbphgy1oxnm194kh01l3lu0iv6r5cc9j136e5mq75ro5o28rdwtuuebum93ue67p2ehxphyl2kojozopuhfvsa91dfzj5kc46ayt6jamqpwkkos8gw90ck1kmczq7ldi750apaxf5qxdpm5dcly8yw8diqvd70y0k95a44e8bxfg3klhr8xogahihuoh4p59qdoflnx9djgb3gr4i3ez7nad90rvdgenpjlx4d6pdaizbzfr7n4s0lqnthag2ke8d644bcivuftp4vghzctlxamrupr5qu6rzl3imlo640ljfxd7rsg == \u\r\e\4\i\t\r\o\1\h\1\p\b\c\t\k\8\y\a\m\i\5\m\e\v\r\g\w\6\o\h\u\q\s\8\k\q\9\e\d\5\8\6\p\t\2\i\i\9\1\y\3\6\v\s\9\o\h\l\h\o\i\j\g\p\l\g\m\m\0\e\p\t\e\b\q\0\l\9\h\n\0\y\7\s\r\h\t\3\7\2\f\i\f\e\t\6\f\o\m\9\8\t\8\9\9\k\f\x\a\x\k\d\d\p\o\3\1\m\1\n\h\w\h\b\6\t\w\z\y\w\4\v\2\q\j\q\h\8\f\o\o\8\m\h\2\8\9\t\k\7\4\o\c\a\j\6\d\i\f\v\l\2\f\k\b\d\r\c\f\x\b\o\b\d\g\i\x\n\h\7\3\z\c\5\y\0\2\g\5\f\q\c\i\y\c\e\1\7\y\e\g\u\0\8\m\b\p\h\g\y\1\o\x\n\m\1\9\4\k\h\0\1\l\3\l\u\0\i\v\6\r\5\c\c\9\j\1\3\6\e\5\m\q\7\5\r\o\5\o\2\8\r\d\w\t\u\u\e\b\u\m\9\3\u\e\6\7\p\2\e\h\x\p\h\y\l\2\k\o\j\o\z\o\p\u\h\f\v\s\a\9\1\d\f\z\j\5\k\c\4\6\a\y\t\6\j\a\m\q\p\w\k\k\o\s\8\g\w\9\0\c\k\1\k\m\c\z\q\7\l\d\i\7\5\0\a\p\a\x\f\5\q\x\d\p\m\5\d\c\l\y\8\y\w\8\d\i\q\v\d\7\0\y\0\k\9\5\a\4\4\e\8\b\x\f\g\3\k\l\h\r\8\x\o\g\a\h\i\h\u\o\h\4\p\5\9\q\d\o\f\l\n\x\9\d\j\g\b\3\g\r\4\i\3\e\z\7\n\a\d\9\0\r\v\d\g\e\n\p\j\l\x\4\d\6\p\d\a\i\z\b\z\f\r\7\n\4\s\0\l\q\n\t\h\a\g\2\k\e\8\d\6\4\4\b\c\i\v\u\f\t\p\4\v\g\h\z\c\t\l\x\a\m\r\u\p\r\5\q\u\6\r\z\l\3\i\m\l\o\6\4\0\l\j\f\x\d\7\r\s\g ]] 00:05:38.064 19:39:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:38.064 19:39:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:05:38.322 [2024-11-26 19:39:33.313341] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:05:38.322 [2024-11-26 19:39:33.313535] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59486 ] 00:05:38.322 [2024-11-26 19:39:33.453416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.322 [2024-11-26 19:39:33.485329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.322 [2024-11-26 19:39:33.513280] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:38.322  [2024-11-26T19:39:33.826Z] Copying: 512/512 [B] (average 166 kBps) 00:05:38.579 00:05:38.580 19:39:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ ure4itro1h1pbctk8yami5mevrgw6ohuqs8kq9ed586pt2ii91y36vs9ohlhoijgplgmm0eptebq0l9hn0y7srht372fifet6fom98t899kfxaxkddpo31m1nhwhb6twzyw4v2qjqh8foo8mh289tk74ocaj6difvl2fkbdrcfxbobdgixnh73zc5y02g5fqciyce17yegu08mbphgy1oxnm194kh01l3lu0iv6r5cc9j136e5mq75ro5o28rdwtuuebum93ue67p2ehxphyl2kojozopuhfvsa91dfzj5kc46ayt6jamqpwkkos8gw90ck1kmczq7ldi750apaxf5qxdpm5dcly8yw8diqvd70y0k95a44e8bxfg3klhr8xogahihuoh4p59qdoflnx9djgb3gr4i3ez7nad90rvdgenpjlx4d6pdaizbzfr7n4s0lqnthag2ke8d644bcivuftp4vghzctlxamrupr5qu6rzl3imlo640ljfxd7rsg == \u\r\e\4\i\t\r\o\1\h\1\p\b\c\t\k\8\y\a\m\i\5\m\e\v\r\g\w\6\o\h\u\q\s\8\k\q\9\e\d\5\8\6\p\t\2\i\i\9\1\y\3\6\v\s\9\o\h\l\h\o\i\j\g\p\l\g\m\m\0\e\p\t\e\b\q\0\l\9\h\n\0\y\7\s\r\h\t\3\7\2\f\i\f\e\t\6\f\o\m\9\8\t\8\9\9\k\f\x\a\x\k\d\d\p\o\3\1\m\1\n\h\w\h\b\6\t\w\z\y\w\4\v\2\q\j\q\h\8\f\o\o\8\m\h\2\8\9\t\k\7\4\o\c\a\j\6\d\i\f\v\l\2\f\k\b\d\r\c\f\x\b\o\b\d\g\i\x\n\h\7\3\z\c\5\y\0\2\g\5\f\q\c\i\y\c\e\1\7\y\e\g\u\0\8\m\b\p\h\g\y\1\o\x\n\m\1\9\4\k\h\0\1\l\3\l\u\0\i\v\6\r\5\c\c\9\j\1\3\6\e\5\m\q\7\5\r\o\5\o\2\8\r\d\w\t\u\u\e\b\u\m\9\3\u\e\6\7\p\2\e\h\x\p\h\y\l\2\k\o\j\o\z\o\p\u\h\f\v\s\a\9\1\d\f\z\j\5\k\c\4\6\a\y\t\6\j\a\m\q\p\w\k\k\o\s\8\g\w\9\0\c\k\1\k\m\c\z\q\7\l\d\i\7\5\0\a\p\a\x\f\5\q\x\d\p\m\5\d\c\l\y\8\y\w\8\d\i\q\v\d\7\0\y\0\k\9\5\a\4\4\e\8\b\x\f\g\3\k\l\h\r\8\x\o\g\a\h\i\h\u\o\h\4\p\5\9\q\d\o\f\l\n\x\9\d\j\g\b\3\g\r\4\i\3\e\z\7\n\a\d\9\0\r\v\d\g\e\n\p\j\l\x\4\d\6\p\d\a\i\z\b\z\f\r\7\n\4\s\0\l\q\n\t\h\a\g\2\k\e\8\d\6\4\4\b\c\i\v\u\f\t\p\4\v\g\h\z\c\t\l\x\a\m\r\u\p\r\5\q\u\6\r\z\l\3\i\m\l\o\6\4\0\l\j\f\x\d\7\r\s\g ]] 00:05:38.580 19:39:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:05:38.580 19:39:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:05:38.580 19:39:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:05:38.580 19:39:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:05:38.580 19:39:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:38.580 19:39:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:05:38.580 [2024-11-26 19:39:33.678102] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:05:38.580 [2024-11-26 19:39:33.678169] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59495 ] 00:05:38.580 [2024-11-26 19:39:33.814664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.839 [2024-11-26 19:39:33.846814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.839 [2024-11-26 19:39:33.876507] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:38.839  [2024-11-26T19:39:34.086Z] Copying: 512/512 [B] (average 500 kBps) 00:05:38.839 00:05:38.839 19:39:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ tfl7vb2ajmekm9u9uvi6tsuj6iuyfzg3rxbwqh3ixwud4cn5wjc0mh3siqr4mi4gzggd0ncrhel4d9vvo693mosavde0vfmr9tox0n27074ip4puumv9zj6b41vhqt1lckqm6u18jcl4f2uemiff906f43k9d8pdzlq4f3jr2i0y3cbmannzw0zsxtpbsly0uqdc3am5tsf2vjadj5nlua067qneh1ahmbsvrga5kif9f72n7py6yhoz8ummmxmx0y9i30zhsuorxthyo2569vusa1wy4m2vbqp6guvhstxojsnica4fyk3rkrpy4hz9kt9lucmkf39apsz7olcgpuq2e858xqneso3xfgtbq6s20e1rpjjc3heoo5qpywgjp4r4shwgy0idjv67plctmyvgi4gukn6sk2qsbybn2e7jj6mfrycivi1a96c3lihdolno6v7z42h61gxqpkuu8ihz177dbvelewrfeea70ft2i9qtlg9mygk9v8nbrr00 == \t\f\l\7\v\b\2\a\j\m\e\k\m\9\u\9\u\v\i\6\t\s\u\j\6\i\u\y\f\z\g\3\r\x\b\w\q\h\3\i\x\w\u\d\4\c\n\5\w\j\c\0\m\h\3\s\i\q\r\4\m\i\4\g\z\g\g\d\0\n\c\r\h\e\l\4\d\9\v\v\o\6\9\3\m\o\s\a\v\d\e\0\v\f\m\r\9\t\o\x\0\n\2\7\0\7\4\i\p\4\p\u\u\m\v\9\z\j\6\b\4\1\v\h\q\t\1\l\c\k\q\m\6\u\1\8\j\c\l\4\f\2\u\e\m\i\f\f\9\0\6\f\4\3\k\9\d\8\p\d\z\l\q\4\f\3\j\r\2\i\0\y\3\c\b\m\a\n\n\z\w\0\z\s\x\t\p\b\s\l\y\0\u\q\d\c\3\a\m\5\t\s\f\2\v\j\a\d\j\5\n\l\u\a\0\6\7\q\n\e\h\1\a\h\m\b\s\v\r\g\a\5\k\i\f\9\f\7\2\n\7\p\y\6\y\h\o\z\8\u\m\m\m\x\m\x\0\y\9\i\3\0\z\h\s\u\o\r\x\t\h\y\o\2\5\6\9\v\u\s\a\1\w\y\4\m\2\v\b\q\p\6\g\u\v\h\s\t\x\o\j\s\n\i\c\a\4\f\y\k\3\r\k\r\p\y\4\h\z\9\k\t\9\l\u\c\m\k\f\3\9\a\p\s\z\7\o\l\c\g\p\u\q\2\e\8\5\8\x\q\n\e\s\o\3\x\f\g\t\b\q\6\s\2\0\e\1\r\p\j\j\c\3\h\e\o\o\5\q\p\y\w\g\j\p\4\r\4\s\h\w\g\y\0\i\d\j\v\6\7\p\l\c\t\m\y\v\g\i\4\g\u\k\n\6\s\k\2\q\s\b\y\b\n\2\e\7\j\j\6\m\f\r\y\c\i\v\i\1\a\9\6\c\3\l\i\h\d\o\l\n\o\6\v\7\z\4\2\h\6\1\g\x\q\p\k\u\u\8\i\h\z\1\7\7\d\b\v\e\l\e\w\r\f\e\e\a\7\0\f\t\2\i\9\q\t\l\g\9\m\y\g\k\9\v\8\n\b\r\r\0\0 ]] 00:05:38.839 19:39:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:38.839 19:39:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:05:38.839 [2024-11-26 19:39:34.027904] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:05:38.839 [2024-11-26 19:39:34.027968] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59499 ] 00:05:39.098 [2024-11-26 19:39:34.168325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.098 [2024-11-26 19:39:34.201304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.098 [2024-11-26 19:39:34.231338] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:39.098  [2024-11-26T19:39:34.603Z] Copying: 512/512 [B] (average 500 kBps) 00:05:39.356 00:05:39.356 19:39:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ tfl7vb2ajmekm9u9uvi6tsuj6iuyfzg3rxbwqh3ixwud4cn5wjc0mh3siqr4mi4gzggd0ncrhel4d9vvo693mosavde0vfmr9tox0n27074ip4puumv9zj6b41vhqt1lckqm6u18jcl4f2uemiff906f43k9d8pdzlq4f3jr2i0y3cbmannzw0zsxtpbsly0uqdc3am5tsf2vjadj5nlua067qneh1ahmbsvrga5kif9f72n7py6yhoz8ummmxmx0y9i30zhsuorxthyo2569vusa1wy4m2vbqp6guvhstxojsnica4fyk3rkrpy4hz9kt9lucmkf39apsz7olcgpuq2e858xqneso3xfgtbq6s20e1rpjjc3heoo5qpywgjp4r4shwgy0idjv67plctmyvgi4gukn6sk2qsbybn2e7jj6mfrycivi1a96c3lihdolno6v7z42h61gxqpkuu8ihz177dbvelewrfeea70ft2i9qtlg9mygk9v8nbrr00 == \t\f\l\7\v\b\2\a\j\m\e\k\m\9\u\9\u\v\i\6\t\s\u\j\6\i\u\y\f\z\g\3\r\x\b\w\q\h\3\i\x\w\u\d\4\c\n\5\w\j\c\0\m\h\3\s\i\q\r\4\m\i\4\g\z\g\g\d\0\n\c\r\h\e\l\4\d\9\v\v\o\6\9\3\m\o\s\a\v\d\e\0\v\f\m\r\9\t\o\x\0\n\2\7\0\7\4\i\p\4\p\u\u\m\v\9\z\j\6\b\4\1\v\h\q\t\1\l\c\k\q\m\6\u\1\8\j\c\l\4\f\2\u\e\m\i\f\f\9\0\6\f\4\3\k\9\d\8\p\d\z\l\q\4\f\3\j\r\2\i\0\y\3\c\b\m\a\n\n\z\w\0\z\s\x\t\p\b\s\l\y\0\u\q\d\c\3\a\m\5\t\s\f\2\v\j\a\d\j\5\n\l\u\a\0\6\7\q\n\e\h\1\a\h\m\b\s\v\r\g\a\5\k\i\f\9\f\7\2\n\7\p\y\6\y\h\o\z\8\u\m\m\m\x\m\x\0\y\9\i\3\0\z\h\s\u\o\r\x\t\h\y\o\2\5\6\9\v\u\s\a\1\w\y\4\m\2\v\b\q\p\6\g\u\v\h\s\t\x\o\j\s\n\i\c\a\4\f\y\k\3\r\k\r\p\y\4\h\z\9\k\t\9\l\u\c\m\k\f\3\9\a\p\s\z\7\o\l\c\g\p\u\q\2\e\8\5\8\x\q\n\e\s\o\3\x\f\g\t\b\q\6\s\2\0\e\1\r\p\j\j\c\3\h\e\o\o\5\q\p\y\w\g\j\p\4\r\4\s\h\w\g\y\0\i\d\j\v\6\7\p\l\c\t\m\y\v\g\i\4\g\u\k\n\6\s\k\2\q\s\b\y\b\n\2\e\7\j\j\6\m\f\r\y\c\i\v\i\1\a\9\6\c\3\l\i\h\d\o\l\n\o\6\v\7\z\4\2\h\6\1\g\x\q\p\k\u\u\8\i\h\z\1\7\7\d\b\v\e\l\e\w\r\f\e\e\a\7\0\f\t\2\i\9\q\t\l\g\9\m\y\g\k\9\v\8\n\b\r\r\0\0 ]] 00:05:39.356 19:39:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:39.356 19:39:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:05:39.356 [2024-11-26 19:39:34.388177] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:05:39.356 [2024-11-26 19:39:34.388361] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59509 ] 00:05:39.356 [2024-11-26 19:39:34.523731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.356 [2024-11-26 19:39:34.554864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.356 [2024-11-26 19:39:34.582607] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:39.613  [2024-11-26T19:39:34.860Z] Copying: 512/512 [B] (average 500 kBps) 00:05:39.613 00:05:39.614 19:39:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ tfl7vb2ajmekm9u9uvi6tsuj6iuyfzg3rxbwqh3ixwud4cn5wjc0mh3siqr4mi4gzggd0ncrhel4d9vvo693mosavde0vfmr9tox0n27074ip4puumv9zj6b41vhqt1lckqm6u18jcl4f2uemiff906f43k9d8pdzlq4f3jr2i0y3cbmannzw0zsxtpbsly0uqdc3am5tsf2vjadj5nlua067qneh1ahmbsvrga5kif9f72n7py6yhoz8ummmxmx0y9i30zhsuorxthyo2569vusa1wy4m2vbqp6guvhstxojsnica4fyk3rkrpy4hz9kt9lucmkf39apsz7olcgpuq2e858xqneso3xfgtbq6s20e1rpjjc3heoo5qpywgjp4r4shwgy0idjv67plctmyvgi4gukn6sk2qsbybn2e7jj6mfrycivi1a96c3lihdolno6v7z42h61gxqpkuu8ihz177dbvelewrfeea70ft2i9qtlg9mygk9v8nbrr00 == \t\f\l\7\v\b\2\a\j\m\e\k\m\9\u\9\u\v\i\6\t\s\u\j\6\i\u\y\f\z\g\3\r\x\b\w\q\h\3\i\x\w\u\d\4\c\n\5\w\j\c\0\m\h\3\s\i\q\r\4\m\i\4\g\z\g\g\d\0\n\c\r\h\e\l\4\d\9\v\v\o\6\9\3\m\o\s\a\v\d\e\0\v\f\m\r\9\t\o\x\0\n\2\7\0\7\4\i\p\4\p\u\u\m\v\9\z\j\6\b\4\1\v\h\q\t\1\l\c\k\q\m\6\u\1\8\j\c\l\4\f\2\u\e\m\i\f\f\9\0\6\f\4\3\k\9\d\8\p\d\z\l\q\4\f\3\j\r\2\i\0\y\3\c\b\m\a\n\n\z\w\0\z\s\x\t\p\b\s\l\y\0\u\q\d\c\3\a\m\5\t\s\f\2\v\j\a\d\j\5\n\l\u\a\0\6\7\q\n\e\h\1\a\h\m\b\s\v\r\g\a\5\k\i\f\9\f\7\2\n\7\p\y\6\y\h\o\z\8\u\m\m\m\x\m\x\0\y\9\i\3\0\z\h\s\u\o\r\x\t\h\y\o\2\5\6\9\v\u\s\a\1\w\y\4\m\2\v\b\q\p\6\g\u\v\h\s\t\x\o\j\s\n\i\c\a\4\f\y\k\3\r\k\r\p\y\4\h\z\9\k\t\9\l\u\c\m\k\f\3\9\a\p\s\z\7\o\l\c\g\p\u\q\2\e\8\5\8\x\q\n\e\s\o\3\x\f\g\t\b\q\6\s\2\0\e\1\r\p\j\j\c\3\h\e\o\o\5\q\p\y\w\g\j\p\4\r\4\s\h\w\g\y\0\i\d\j\v\6\7\p\l\c\t\m\y\v\g\i\4\g\u\k\n\6\s\k\2\q\s\b\y\b\n\2\e\7\j\j\6\m\f\r\y\c\i\v\i\1\a\9\6\c\3\l\i\h\d\o\l\n\o\6\v\7\z\4\2\h\6\1\g\x\q\p\k\u\u\8\i\h\z\1\7\7\d\b\v\e\l\e\w\r\f\e\e\a\7\0\f\t\2\i\9\q\t\l\g\9\m\y\g\k\9\v\8\n\b\r\r\0\0 ]] 00:05:39.614 19:39:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:39.614 19:39:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:05:39.614 [2024-11-26 19:39:34.721262] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:05:39.614 [2024-11-26 19:39:34.721392] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59518 ] 00:05:39.614 [2024-11-26 19:39:34.858612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.871 [2024-11-26 19:39:34.891896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.871 [2024-11-26 19:39:34.923225] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:39.871  [2024-11-26T19:39:35.118Z] Copying: 512/512 [B] (average 250 kBps) 00:05:39.871 00:05:39.871 19:39:35 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ tfl7vb2ajmekm9u9uvi6tsuj6iuyfzg3rxbwqh3ixwud4cn5wjc0mh3siqr4mi4gzggd0ncrhel4d9vvo693mosavde0vfmr9tox0n27074ip4puumv9zj6b41vhqt1lckqm6u18jcl4f2uemiff906f43k9d8pdzlq4f3jr2i0y3cbmannzw0zsxtpbsly0uqdc3am5tsf2vjadj5nlua067qneh1ahmbsvrga5kif9f72n7py6yhoz8ummmxmx0y9i30zhsuorxthyo2569vusa1wy4m2vbqp6guvhstxojsnica4fyk3rkrpy4hz9kt9lucmkf39apsz7olcgpuq2e858xqneso3xfgtbq6s20e1rpjjc3heoo5qpywgjp4r4shwgy0idjv67plctmyvgi4gukn6sk2qsbybn2e7jj6mfrycivi1a96c3lihdolno6v7z42h61gxqpkuu8ihz177dbvelewrfeea70ft2i9qtlg9mygk9v8nbrr00 == \t\f\l\7\v\b\2\a\j\m\e\k\m\9\u\9\u\v\i\6\t\s\u\j\6\i\u\y\f\z\g\3\r\x\b\w\q\h\3\i\x\w\u\d\4\c\n\5\w\j\c\0\m\h\3\s\i\q\r\4\m\i\4\g\z\g\g\d\0\n\c\r\h\e\l\4\d\9\v\v\o\6\9\3\m\o\s\a\v\d\e\0\v\f\m\r\9\t\o\x\0\n\2\7\0\7\4\i\p\4\p\u\u\m\v\9\z\j\6\b\4\1\v\h\q\t\1\l\c\k\q\m\6\u\1\8\j\c\l\4\f\2\u\e\m\i\f\f\9\0\6\f\4\3\k\9\d\8\p\d\z\l\q\4\f\3\j\r\2\i\0\y\3\c\b\m\a\n\n\z\w\0\z\s\x\t\p\b\s\l\y\0\u\q\d\c\3\a\m\5\t\s\f\2\v\j\a\d\j\5\n\l\u\a\0\6\7\q\n\e\h\1\a\h\m\b\s\v\r\g\a\5\k\i\f\9\f\7\2\n\7\p\y\6\y\h\o\z\8\u\m\m\m\x\m\x\0\y\9\i\3\0\z\h\s\u\o\r\x\t\h\y\o\2\5\6\9\v\u\s\a\1\w\y\4\m\2\v\b\q\p\6\g\u\v\h\s\t\x\o\j\s\n\i\c\a\4\f\y\k\3\r\k\r\p\y\4\h\z\9\k\t\9\l\u\c\m\k\f\3\9\a\p\s\z\7\o\l\c\g\p\u\q\2\e\8\5\8\x\q\n\e\s\o\3\x\f\g\t\b\q\6\s\2\0\e\1\r\p\j\j\c\3\h\e\o\o\5\q\p\y\w\g\j\p\4\r\4\s\h\w\g\y\0\i\d\j\v\6\7\p\l\c\t\m\y\v\g\i\4\g\u\k\n\6\s\k\2\q\s\b\y\b\n\2\e\7\j\j\6\m\f\r\y\c\i\v\i\1\a\9\6\c\3\l\i\h\d\o\l\n\o\6\v\7\z\4\2\h\6\1\g\x\q\p\k\u\u\8\i\h\z\1\7\7\d\b\v\e\l\e\w\r\f\e\e\a\7\0\f\t\2\i\9\q\t\l\g\9\m\y\g\k\9\v\8\n\b\r\r\0\0 ]] 00:05:39.871 00:05:39.871 real 0m2.863s 00:05:39.871 user 0m1.400s 00:05:39.871 sys 0m1.193s 00:05:39.871 19:39:35 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.871 19:39:35 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:05:39.871 ************************************ 00:05:39.871 END TEST dd_flags_misc 00:05:39.871 ************************************ 00:05:39.871 19:39:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:05:39.871 19:39:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:05:39.871 * Second test run, disabling liburing, forcing AIO 00:05:39.871 19:39:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:05:39.871 19:39:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:05:39.871 19:39:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.871 19:39:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.871 19:39:35 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:05:39.871 ************************************ 00:05:39.871 START TEST dd_flag_append_forced_aio 00:05:39.871 ************************************ 00:05:39.871 19:39:35 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1129 -- # append 00:05:39.871 19:39:35 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:05:39.871 19:39:35 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:05:39.871 19:39:35 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:05:39.871 19:39:35 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:05:39.871 19:39:35 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:05:39.871 19:39:35 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=olgwvp0rdu95vvxd3djuc2ciiohwrx12 00:05:39.871 19:39:35 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:05:39.871 19:39:35 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:05:39.871 19:39:35 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:05:39.871 19:39:35 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=p2sz4iytll7wyenn4y2e7irtv7na37li 00:05:39.871 19:39:35 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s olgwvp0rdu95vvxd3djuc2ciiohwrx12 00:05:39.871 19:39:35 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s p2sz4iytll7wyenn4y2e7irtv7na37li 00:05:39.871 19:39:35 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:05:40.129 [2024-11-26 19:39:35.128465] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:05:40.129 [2024-11-26 19:39:35.128528] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59547 ] 00:05:40.129 [2024-11-26 19:39:35.257914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.129 [2024-11-26 19:39:35.291473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.129 [2024-11-26 19:39:35.322161] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:40.129  [2024-11-26T19:39:35.633Z] Copying: 32/32 [B] (average 31 kBps) 00:05:40.386 00:05:40.386 19:39:35 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ p2sz4iytll7wyenn4y2e7irtv7na37liolgwvp0rdu95vvxd3djuc2ciiohwrx12 == \p\2\s\z\4\i\y\t\l\l\7\w\y\e\n\n\4\y\2\e\7\i\r\t\v\7\n\a\3\7\l\i\o\l\g\w\v\p\0\r\d\u\9\5\v\v\x\d\3\d\j\u\c\2\c\i\i\o\h\w\r\x\1\2 ]] 00:05:40.386 00:05:40.386 real 0m0.380s 00:05:40.386 user 0m0.188s 00:05:40.386 sys 0m0.074s 00:05:40.386 19:39:35 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.386 19:39:35 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:05:40.386 ************************************ 00:05:40.386 END TEST dd_flag_append_forced_aio 00:05:40.386 ************************************ 00:05:40.386 19:39:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:05:40.386 19:39:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.386 19:39:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.386 19:39:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:05:40.386 ************************************ 00:05:40.386 START TEST dd_flag_directory_forced_aio 00:05:40.386 ************************************ 00:05:40.386 19:39:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1129 -- # directory 00:05:40.386 19:39:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:40.386 19:39:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:05:40.386 19:39:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:40.386 19:39:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:40.386 19:39:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:40.386 19:39:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:40.386 19:39:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:40.386 19:39:35 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:40.386 19:39:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:40.386 19:39:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:40.386 19:39:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:40.386 19:39:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:40.386 [2024-11-26 19:39:35.546502] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:05:40.386 [2024-11-26 19:39:35.546577] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59568 ] 00:05:40.644 [2024-11-26 19:39:35.678097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.644 [2024-11-26 19:39:35.711839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.644 [2024-11-26 19:39:35.742017] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:40.644 [2024-11-26 19:39:35.767376] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:05:40.644 [2024-11-26 19:39:35.767412] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:05:40.644 [2024-11-26 19:39:35.767421] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:40.644 [2024-11-26 19:39:35.826440] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:05:40.644 19:39:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:05:40.644 19:39:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:40.644 19:39:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:05:40.644 19:39:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:05:40.644 19:39:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:05:40.644 19:39:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:40.644 19:39:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:05:40.644 19:39:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:05:40.644 19:39:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:05:40.644 19:39:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:40.644 19:39:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:40.644 19:39:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:40.644 19:39:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:40.644 19:39:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:40.644 19:39:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:40.644 19:39:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:40.644 19:39:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:40.644 19:39:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:05:40.902 [2024-11-26 19:39:35.899469] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:05:40.902 [2024-11-26 19:39:35.899630] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59577 ] 00:05:40.902 [2024-11-26 19:39:36.034931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.902 [2024-11-26 19:39:36.067962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.902 [2024-11-26 19:39:36.098873] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:40.902 [2024-11-26 19:39:36.124399] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:05:40.902 [2024-11-26 19:39:36.124438] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:05:40.902 [2024-11-26 19:39:36.124447] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:41.159 [2024-11-26 19:39:36.184931] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:05:41.159 ************************************ 00:05:41.159 END TEST dd_flag_directory_forced_aio 00:05:41.159 ************************************ 00:05:41.159 19:39:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:05:41.159 19:39:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:41.159 19:39:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:05:41.159 19:39:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:05:41.159 19:39:36 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:05:41.159 19:39:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:41.159 00:05:41.159 real 0m0.715s 00:05:41.159 user 0m0.340s 00:05:41.159 sys 0m0.169s 00:05:41.159 19:39:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.159 19:39:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:05:41.159 19:39:36 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:05:41.159 19:39:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.159 19:39:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.159 19:39:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:05:41.159 ************************************ 00:05:41.159 START TEST dd_flag_nofollow_forced_aio 00:05:41.159 ************************************ 00:05:41.159 19:39:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1129 -- # nofollow 00:05:41.159 19:39:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:05:41.159 19:39:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:05:41.159 19:39:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:05:41.159 19:39:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:05:41.160 19:39:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:41.160 19:39:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:05:41.160 19:39:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:41.160 19:39:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:41.160 19:39:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:41.160 19:39:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:41.160 19:39:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:41.160 19:39:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:41.160 19:39:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case 
"$(type -t "$arg")" in 00:05:41.160 19:39:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:41.160 19:39:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:41.160 19:39:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:41.160 [2024-11-26 19:39:36.300025] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:05:41.160 [2024-11-26 19:39:36.300085] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59606 ] 00:05:41.417 [2024-11-26 19:39:36.435641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.417 [2024-11-26 19:39:36.469561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.417 [2024-11-26 19:39:36.501522] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:41.417 [2024-11-26 19:39:36.526521] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:05:41.417 [2024-11-26 19:39:36.526567] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:05:41.417 [2024-11-26 19:39:36.526577] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:41.417 [2024-11-26 19:39:36.591937] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:05:41.417 19:39:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:05:41.417 19:39:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:41.417 19:39:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:05:41.417 19:39:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:05:41.417 19:39:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:05:41.417 19:39:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:41.417 19:39:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:05:41.417 19:39:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:05:41.417 19:39:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:05:41.417 19:39:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:41.417 19:39:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:41.417 19:39:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:41.417 19:39:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:41.417 19:39:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:41.417 19:39:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:41.417 19:39:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:41.417 19:39:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:41.417 19:39:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:05:41.675 [2024-11-26 19:39:36.664890] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:05:41.675 [2024-11-26 19:39:36.664948] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59610 ] 00:05:41.675 [2024-11-26 19:39:36.799480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.675 [2024-11-26 19:39:36.836948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.675 [2024-11-26 19:39:36.869318] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:41.675 [2024-11-26 19:39:36.894242] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:05:41.675 [2024-11-26 19:39:36.894277] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:05:41.675 [2024-11-26 19:39:36.894286] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:41.935 [2024-11-26 19:39:36.960013] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:05:41.935 19:39:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:05:41.935 19:39:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:41.935 19:39:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:05:41.935 19:39:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:05:41.935 19:39:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:05:41.935 19:39:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:41.936 19:39:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 
-- # gen_bytes 512 00:05:41.936 19:39:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:05:41.936 19:39:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:05:41.936 19:39:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:41.936 [2024-11-26 19:39:37.041573] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:05:41.936 [2024-11-26 19:39:37.041636] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59617 ] 00:05:41.936 [2024-11-26 19:39:37.175682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.195 [2024-11-26 19:39:37.210715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.195 [2024-11-26 19:39:37.242667] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:42.195  [2024-11-26T19:39:37.442Z] Copying: 512/512 [B] (average 500 kBps) 00:05:42.195 00:05:42.195 ************************************ 00:05:42.195 END TEST dd_flag_nofollow_forced_aio 00:05:42.195 ************************************ 00:05:42.195 19:39:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ sipec5dc6g3l6e6zt7r0m2vu6zbxgbd5b128arv28glo4sdlttuqc0h6v16o9b79cu4cib9y0t7f8donjoygodt4fe7sv5rws5u22cxcoxlcmx5do6osspx1464j6a098ui1hqek9fszcqsswcp8lev3jjs8pqlugztmyvoa0ea429ju6md1dszidy9hjawbngtf3gei1m79vw4fkjw0kydld6s29esmgbxpk7mwhkm0gprtnbvuutvx5ox0uv0mqxb84jhgg073lkjcvcqaco6qjxwlv1lk0itsurgwc9ueiir1y60mx2kykrqhdihty6e0wikrcbxlz0zp0p9ym9sro1789l5ppch6u1kh4qz00sz5ohtestvwkpc3hh3taxpiappa4zc3ivefvesvkxp0keu98emv1wfxpwqjw5wcb7uq3133tcuo39fz688md92n47h0x2r7rlyfa0c68e0un86knsirv6ckxiiwjhck23rwzw48lvhnombpxl8o == \s\i\p\e\c\5\d\c\6\g\3\l\6\e\6\z\t\7\r\0\m\2\v\u\6\z\b\x\g\b\d\5\b\1\2\8\a\r\v\2\8\g\l\o\4\s\d\l\t\t\u\q\c\0\h\6\v\1\6\o\9\b\7\9\c\u\4\c\i\b\9\y\0\t\7\f\8\d\o\n\j\o\y\g\o\d\t\4\f\e\7\s\v\5\r\w\s\5\u\2\2\c\x\c\o\x\l\c\m\x\5\d\o\6\o\s\s\p\x\1\4\6\4\j\6\a\0\9\8\u\i\1\h\q\e\k\9\f\s\z\c\q\s\s\w\c\p\8\l\e\v\3\j\j\s\8\p\q\l\u\g\z\t\m\y\v\o\a\0\e\a\4\2\9\j\u\6\m\d\1\d\s\z\i\d\y\9\h\j\a\w\b\n\g\t\f\3\g\e\i\1\m\7\9\v\w\4\f\k\j\w\0\k\y\d\l\d\6\s\2\9\e\s\m\g\b\x\p\k\7\m\w\h\k\m\0\g\p\r\t\n\b\v\u\u\t\v\x\5\o\x\0\u\v\0\m\q\x\b\8\4\j\h\g\g\0\7\3\l\k\j\c\v\c\q\a\c\o\6\q\j\x\w\l\v\1\l\k\0\i\t\s\u\r\g\w\c\9\u\e\i\i\r\1\y\6\0\m\x\2\k\y\k\r\q\h\d\i\h\t\y\6\e\0\w\i\k\r\c\b\x\l\z\0\z\p\0\p\9\y\m\9\s\r\o\1\7\8\9\l\5\p\p\c\h\6\u\1\k\h\4\q\z\0\0\s\z\5\o\h\t\e\s\t\v\w\k\p\c\3\h\h\3\t\a\x\p\i\a\p\p\a\4\z\c\3\i\v\e\f\v\e\s\v\k\x\p\0\k\e\u\9\8\e\m\v\1\w\f\x\p\w\q\j\w\5\w\c\b\7\u\q\3\1\3\3\t\c\u\o\3\9\f\z\6\8\8\m\d\9\2\n\4\7\h\0\x\2\r\7\r\l\y\f\a\0\c\6\8\e\0\u\n\8\6\k\n\s\i\r\v\6\c\k\x\i\i\w\j\h\c\k\2\3\r\w\z\w\4\8\l\v\h\n\o\m\b\p\x\l\8\o ]] 00:05:42.195 00:05:42.195 real 0m1.131s 00:05:42.195 user 0m0.554s 00:05:42.195 sys 0m0.252s 00:05:42.195 19:39:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.196 19:39:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:05:42.196 19:39:37 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 
-- # run_test dd_flag_noatime_forced_aio noatime 00:05:42.196 19:39:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.196 19:39:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.196 19:39:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:05:42.196 ************************************ 00:05:42.196 START TEST dd_flag_noatime_forced_aio 00:05:42.196 ************************************ 00:05:42.196 19:39:37 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1129 -- # noatime 00:05:42.196 19:39:37 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:05:42.196 19:39:37 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:05:42.196 19:39:37 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:05:42.196 19:39:37 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:05:42.196 19:39:37 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:05:42.455 19:39:37 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:42.455 19:39:37 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1732649977 00:05:42.455 19:39:37 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:42.455 19:39:37 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1732649977 00:05:42.455 19:39:37 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:05:43.386 19:39:38 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:43.386 [2024-11-26 19:39:38.488484] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:05:43.386 [2024-11-26 19:39:38.488692] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59658 ] 00:05:43.386 [2024-11-26 19:39:38.623712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.645 [2024-11-26 19:39:38.658608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.645 [2024-11-26 19:39:38.690117] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:43.645  [2024-11-26T19:39:38.892Z] Copying: 512/512 [B] (average 500 kBps) 00:05:43.645 00:05:43.645 19:39:38 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:43.645 19:39:38 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1732649977 )) 00:05:43.645 19:39:38 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:43.645 19:39:38 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1732649977 )) 00:05:43.645 19:39:38 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:43.645 [2024-11-26 19:39:38.883592] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:05:43.645 [2024-11-26 19:39:38.883663] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59669 ] 00:05:43.902 [2024-11-26 19:39:39.014677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.903 [2024-11-26 19:39:39.049201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.903 [2024-11-26 19:39:39.080603] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:43.903  [2024-11-26T19:39:39.406Z] Copying: 512/512 [B] (average 500 kBps) 00:05:44.159 00:05:44.160 19:39:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:44.160 19:39:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1732649979 )) 00:05:44.160 00:05:44.160 real 0m1.796s 00:05:44.160 user 0m0.371s 00:05:44.160 sys 0m0.186s 00:05:44.160 ************************************ 00:05:44.160 END TEST dd_flag_noatime_forced_aio 00:05:44.160 ************************************ 00:05:44.160 19:39:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.160 19:39:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:05:44.160 19:39:39 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:05:44.160 19:39:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:44.160 19:39:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.160 19:39:39 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:05:44.160 ************************************ 00:05:44.160 START TEST dd_flags_misc_forced_aio 00:05:44.160 ************************************ 00:05:44.160 19:39:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1129 -- # io 00:05:44.160 19:39:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:05:44.160 19:39:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:05:44.160 19:39:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:05:44.160 19:39:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:05:44.160 19:39:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:05:44.160 19:39:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:05:44.160 19:39:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:05:44.160 19:39:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:44.160 19:39:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:05:44.160 [2024-11-26 19:39:39.315001] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:05:44.160 [2024-11-26 19:39:39.315066] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59696 ] 00:05:44.416 [2024-11-26 19:39:39.454194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.416 [2024-11-26 19:39:39.491377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.416 [2024-11-26 19:39:39.523793] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:44.416  [2024-11-26T19:39:39.920Z] Copying: 512/512 [B] (average 500 kBps) 00:05:44.673 00:05:44.673 19:39:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ zjcajg8hau9picic7e6l95pws5y1g9vl0hja1ylkvoevjgasv9cq45lh4qsve3f80bh75m5pmundd64lv5mnhmwj0rfke18p3z4i0rp53kgjhk67hpzrnu80r0tgk1yhuhnlwcxhq9nisdexmdh7zxiqn4870kcjye1vqihr3sbb4ijqkpwbdlahfxfbie5easp5rtu14tsw70mifjck9frr28en2whlh09t13tayfjxi2qw1rfcgmbdb9il3691ecru8pnsc2y4jiok84efn840i7vuyv98tw72318f1vy7yxzif5ip6ck62eats1j9brqaa3df7c47o1nbm256tyxl7pju1r3bqi61l81rigsmcht9vmkjz2lf601a23lftvmtm46jzfu7m2np1t3nlr2dftn6175ur7puo20e69pgeofqtrl4pbmfmx2vnd3dag1v3jq0pgv18ulvj4slgob240lvqsvkmuk1yb0eyxvmnv6n1mjlsk9iwlda9pqz == 
\z\j\c\a\j\g\8\h\a\u\9\p\i\c\i\c\7\e\6\l\9\5\p\w\s\5\y\1\g\9\v\l\0\h\j\a\1\y\l\k\v\o\e\v\j\g\a\s\v\9\c\q\4\5\l\h\4\q\s\v\e\3\f\8\0\b\h\7\5\m\5\p\m\u\n\d\d\6\4\l\v\5\m\n\h\m\w\j\0\r\f\k\e\1\8\p\3\z\4\i\0\r\p\5\3\k\g\j\h\k\6\7\h\p\z\r\n\u\8\0\r\0\t\g\k\1\y\h\u\h\n\l\w\c\x\h\q\9\n\i\s\d\e\x\m\d\h\7\z\x\i\q\n\4\8\7\0\k\c\j\y\e\1\v\q\i\h\r\3\s\b\b\4\i\j\q\k\p\w\b\d\l\a\h\f\x\f\b\i\e\5\e\a\s\p\5\r\t\u\1\4\t\s\w\7\0\m\i\f\j\c\k\9\f\r\r\2\8\e\n\2\w\h\l\h\0\9\t\1\3\t\a\y\f\j\x\i\2\q\w\1\r\f\c\g\m\b\d\b\9\i\l\3\6\9\1\e\c\r\u\8\p\n\s\c\2\y\4\j\i\o\k\8\4\e\f\n\8\4\0\i\7\v\u\y\v\9\8\t\w\7\2\3\1\8\f\1\v\y\7\y\x\z\i\f\5\i\p\6\c\k\6\2\e\a\t\s\1\j\9\b\r\q\a\a\3\d\f\7\c\4\7\o\1\n\b\m\2\5\6\t\y\x\l\7\p\j\u\1\r\3\b\q\i\6\1\l\8\1\r\i\g\s\m\c\h\t\9\v\m\k\j\z\2\l\f\6\0\1\a\2\3\l\f\t\v\m\t\m\4\6\j\z\f\u\7\m\2\n\p\1\t\3\n\l\r\2\d\f\t\n\6\1\7\5\u\r\7\p\u\o\2\0\e\6\9\p\g\e\o\f\q\t\r\l\4\p\b\m\f\m\x\2\v\n\d\3\d\a\g\1\v\3\j\q\0\p\g\v\1\8\u\l\v\j\4\s\l\g\o\b\2\4\0\l\v\q\s\v\k\m\u\k\1\y\b\0\e\y\x\v\m\n\v\6\n\1\m\j\l\s\k\9\i\w\l\d\a\9\p\q\z ]] 00:05:44.673 19:39:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:44.673 19:39:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:05:44.673 [2024-11-26 19:39:39.708761] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:05:44.673 [2024-11-26 19:39:39.708840] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59698 ] 00:05:44.673 [2024-11-26 19:39:39.846283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.673 [2024-11-26 19:39:39.883911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.931 [2024-11-26 19:39:39.918628] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:44.931  [2024-11-26T19:39:40.178Z] Copying: 512/512 [B] (average 500 kBps) 00:05:44.931 00:05:44.931 19:39:40 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ zjcajg8hau9picic7e6l95pws5y1g9vl0hja1ylkvoevjgasv9cq45lh4qsve3f80bh75m5pmundd64lv5mnhmwj0rfke18p3z4i0rp53kgjhk67hpzrnu80r0tgk1yhuhnlwcxhq9nisdexmdh7zxiqn4870kcjye1vqihr3sbb4ijqkpwbdlahfxfbie5easp5rtu14tsw70mifjck9frr28en2whlh09t13tayfjxi2qw1rfcgmbdb9il3691ecru8pnsc2y4jiok84efn840i7vuyv98tw72318f1vy7yxzif5ip6ck62eats1j9brqaa3df7c47o1nbm256tyxl7pju1r3bqi61l81rigsmcht9vmkjz2lf601a23lftvmtm46jzfu7m2np1t3nlr2dftn6175ur7puo20e69pgeofqtrl4pbmfmx2vnd3dag1v3jq0pgv18ulvj4slgob240lvqsvkmuk1yb0eyxvmnv6n1mjlsk9iwlda9pqz == 
\z\j\c\a\j\g\8\h\a\u\9\p\i\c\i\c\7\e\6\l\9\5\p\w\s\5\y\1\g\9\v\l\0\h\j\a\1\y\l\k\v\o\e\v\j\g\a\s\v\9\c\q\4\5\l\h\4\q\s\v\e\3\f\8\0\b\h\7\5\m\5\p\m\u\n\d\d\6\4\l\v\5\m\n\h\m\w\j\0\r\f\k\e\1\8\p\3\z\4\i\0\r\p\5\3\k\g\j\h\k\6\7\h\p\z\r\n\u\8\0\r\0\t\g\k\1\y\h\u\h\n\l\w\c\x\h\q\9\n\i\s\d\e\x\m\d\h\7\z\x\i\q\n\4\8\7\0\k\c\j\y\e\1\v\q\i\h\r\3\s\b\b\4\i\j\q\k\p\w\b\d\l\a\h\f\x\f\b\i\e\5\e\a\s\p\5\r\t\u\1\4\t\s\w\7\0\m\i\f\j\c\k\9\f\r\r\2\8\e\n\2\w\h\l\h\0\9\t\1\3\t\a\y\f\j\x\i\2\q\w\1\r\f\c\g\m\b\d\b\9\i\l\3\6\9\1\e\c\r\u\8\p\n\s\c\2\y\4\j\i\o\k\8\4\e\f\n\8\4\0\i\7\v\u\y\v\9\8\t\w\7\2\3\1\8\f\1\v\y\7\y\x\z\i\f\5\i\p\6\c\k\6\2\e\a\t\s\1\j\9\b\r\q\a\a\3\d\f\7\c\4\7\o\1\n\b\m\2\5\6\t\y\x\l\7\p\j\u\1\r\3\b\q\i\6\1\l\8\1\r\i\g\s\m\c\h\t\9\v\m\k\j\z\2\l\f\6\0\1\a\2\3\l\f\t\v\m\t\m\4\6\j\z\f\u\7\m\2\n\p\1\t\3\n\l\r\2\d\f\t\n\6\1\7\5\u\r\7\p\u\o\2\0\e\6\9\p\g\e\o\f\q\t\r\l\4\p\b\m\f\m\x\2\v\n\d\3\d\a\g\1\v\3\j\q\0\p\g\v\1\8\u\l\v\j\4\s\l\g\o\b\2\4\0\l\v\q\s\v\k\m\u\k\1\y\b\0\e\y\x\v\m\n\v\6\n\1\m\j\l\s\k\9\i\w\l\d\a\9\p\q\z ]] 00:05:44.931 19:39:40 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:44.931 19:39:40 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:05:44.931 [2024-11-26 19:39:40.111343] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:05:44.931 [2024-11-26 19:39:40.111406] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59705 ] 00:05:45.189 [2024-11-26 19:39:40.249297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.189 [2024-11-26 19:39:40.287391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.189 [2024-11-26 19:39:40.320555] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:45.189  [2024-11-26T19:39:40.693Z] Copying: 512/512 [B] (average 250 kBps) 00:05:45.446 00:05:45.446 19:39:40 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ zjcajg8hau9picic7e6l95pws5y1g9vl0hja1ylkvoevjgasv9cq45lh4qsve3f80bh75m5pmundd64lv5mnhmwj0rfke18p3z4i0rp53kgjhk67hpzrnu80r0tgk1yhuhnlwcxhq9nisdexmdh7zxiqn4870kcjye1vqihr3sbb4ijqkpwbdlahfxfbie5easp5rtu14tsw70mifjck9frr28en2whlh09t13tayfjxi2qw1rfcgmbdb9il3691ecru8pnsc2y4jiok84efn840i7vuyv98tw72318f1vy7yxzif5ip6ck62eats1j9brqaa3df7c47o1nbm256tyxl7pju1r3bqi61l81rigsmcht9vmkjz2lf601a23lftvmtm46jzfu7m2np1t3nlr2dftn6175ur7puo20e69pgeofqtrl4pbmfmx2vnd3dag1v3jq0pgv18ulvj4slgob240lvqsvkmuk1yb0eyxvmnv6n1mjlsk9iwlda9pqz == 
\z\j\c\a\j\g\8\h\a\u\9\p\i\c\i\c\7\e\6\l\9\5\p\w\s\5\y\1\g\9\v\l\0\h\j\a\1\y\l\k\v\o\e\v\j\g\a\s\v\9\c\q\4\5\l\h\4\q\s\v\e\3\f\8\0\b\h\7\5\m\5\p\m\u\n\d\d\6\4\l\v\5\m\n\h\m\w\j\0\r\f\k\e\1\8\p\3\z\4\i\0\r\p\5\3\k\g\j\h\k\6\7\h\p\z\r\n\u\8\0\r\0\t\g\k\1\y\h\u\h\n\l\w\c\x\h\q\9\n\i\s\d\e\x\m\d\h\7\z\x\i\q\n\4\8\7\0\k\c\j\y\e\1\v\q\i\h\r\3\s\b\b\4\i\j\q\k\p\w\b\d\l\a\h\f\x\f\b\i\e\5\e\a\s\p\5\r\t\u\1\4\t\s\w\7\0\m\i\f\j\c\k\9\f\r\r\2\8\e\n\2\w\h\l\h\0\9\t\1\3\t\a\y\f\j\x\i\2\q\w\1\r\f\c\g\m\b\d\b\9\i\l\3\6\9\1\e\c\r\u\8\p\n\s\c\2\y\4\j\i\o\k\8\4\e\f\n\8\4\0\i\7\v\u\y\v\9\8\t\w\7\2\3\1\8\f\1\v\y\7\y\x\z\i\f\5\i\p\6\c\k\6\2\e\a\t\s\1\j\9\b\r\q\a\a\3\d\f\7\c\4\7\o\1\n\b\m\2\5\6\t\y\x\l\7\p\j\u\1\r\3\b\q\i\6\1\l\8\1\r\i\g\s\m\c\h\t\9\v\m\k\j\z\2\l\f\6\0\1\a\2\3\l\f\t\v\m\t\m\4\6\j\z\f\u\7\m\2\n\p\1\t\3\n\l\r\2\d\f\t\n\6\1\7\5\u\r\7\p\u\o\2\0\e\6\9\p\g\e\o\f\q\t\r\l\4\p\b\m\f\m\x\2\v\n\d\3\d\a\g\1\v\3\j\q\0\p\g\v\1\8\u\l\v\j\4\s\l\g\o\b\2\4\0\l\v\q\s\v\k\m\u\k\1\y\b\0\e\y\x\v\m\n\v\6\n\1\m\j\l\s\k\9\i\w\l\d\a\9\p\q\z ]] 00:05:45.446 19:39:40 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:45.446 19:39:40 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:05:45.446 [2024-11-26 19:39:40.510611] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:05:45.446 [2024-11-26 19:39:40.510678] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59713 ] 00:05:45.446 [2024-11-26 19:39:40.647691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.446 [2024-11-26 19:39:40.685681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.704 [2024-11-26 19:39:40.719239] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:45.704  [2024-11-26T19:39:40.951Z] Copying: 512/512 [B] (average 500 kBps) 00:05:45.704 00:05:45.704 19:39:40 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ zjcajg8hau9picic7e6l95pws5y1g9vl0hja1ylkvoevjgasv9cq45lh4qsve3f80bh75m5pmundd64lv5mnhmwj0rfke18p3z4i0rp53kgjhk67hpzrnu80r0tgk1yhuhnlwcxhq9nisdexmdh7zxiqn4870kcjye1vqihr3sbb4ijqkpwbdlahfxfbie5easp5rtu14tsw70mifjck9frr28en2whlh09t13tayfjxi2qw1rfcgmbdb9il3691ecru8pnsc2y4jiok84efn840i7vuyv98tw72318f1vy7yxzif5ip6ck62eats1j9brqaa3df7c47o1nbm256tyxl7pju1r3bqi61l81rigsmcht9vmkjz2lf601a23lftvmtm46jzfu7m2np1t3nlr2dftn6175ur7puo20e69pgeofqtrl4pbmfmx2vnd3dag1v3jq0pgv18ulvj4slgob240lvqsvkmuk1yb0eyxvmnv6n1mjlsk9iwlda9pqz == 
\z\j\c\a\j\g\8\h\a\u\9\p\i\c\i\c\7\e\6\l\9\5\p\w\s\5\y\1\g\9\v\l\0\h\j\a\1\y\l\k\v\o\e\v\j\g\a\s\v\9\c\q\4\5\l\h\4\q\s\v\e\3\f\8\0\b\h\7\5\m\5\p\m\u\n\d\d\6\4\l\v\5\m\n\h\m\w\j\0\r\f\k\e\1\8\p\3\z\4\i\0\r\p\5\3\k\g\j\h\k\6\7\h\p\z\r\n\u\8\0\r\0\t\g\k\1\y\h\u\h\n\l\w\c\x\h\q\9\n\i\s\d\e\x\m\d\h\7\z\x\i\q\n\4\8\7\0\k\c\j\y\e\1\v\q\i\h\r\3\s\b\b\4\i\j\q\k\p\w\b\d\l\a\h\f\x\f\b\i\e\5\e\a\s\p\5\r\t\u\1\4\t\s\w\7\0\m\i\f\j\c\k\9\f\r\r\2\8\e\n\2\w\h\l\h\0\9\t\1\3\t\a\y\f\j\x\i\2\q\w\1\r\f\c\g\m\b\d\b\9\i\l\3\6\9\1\e\c\r\u\8\p\n\s\c\2\y\4\j\i\o\k\8\4\e\f\n\8\4\0\i\7\v\u\y\v\9\8\t\w\7\2\3\1\8\f\1\v\y\7\y\x\z\i\f\5\i\p\6\c\k\6\2\e\a\t\s\1\j\9\b\r\q\a\a\3\d\f\7\c\4\7\o\1\n\b\m\2\5\6\t\y\x\l\7\p\j\u\1\r\3\b\q\i\6\1\l\8\1\r\i\g\s\m\c\h\t\9\v\m\k\j\z\2\l\f\6\0\1\a\2\3\l\f\t\v\m\t\m\4\6\j\z\f\u\7\m\2\n\p\1\t\3\n\l\r\2\d\f\t\n\6\1\7\5\u\r\7\p\u\o\2\0\e\6\9\p\g\e\o\f\q\t\r\l\4\p\b\m\f\m\x\2\v\n\d\3\d\a\g\1\v\3\j\q\0\p\g\v\1\8\u\l\v\j\4\s\l\g\o\b\2\4\0\l\v\q\s\v\k\m\u\k\1\y\b\0\e\y\x\v\m\n\v\6\n\1\m\j\l\s\k\9\i\w\l\d\a\9\p\q\z ]] 00:05:45.704 19:39:40 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:05:45.704 19:39:40 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:05:45.704 19:39:40 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:05:45.704 19:39:40 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:05:45.704 19:39:40 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:45.704 19:39:40 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:05:45.704 [2024-11-26 19:39:40.919794] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:05:45.704 [2024-11-26 19:39:40.919854] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59720 ] 00:05:45.962 [2024-11-26 19:39:41.059555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.963 [2024-11-26 19:39:41.098278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.963 [2024-11-26 19:39:41.132133] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:45.963  [2024-11-26T19:39:41.466Z] Copying: 512/512 [B] (average 500 kBps) 00:05:46.219 00:05:46.220 19:39:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ e5fz55o1dfznvx574sxp620anwnmzagw6h7adjsmcxd8ofm6y6ymupkhogw9i31zjl2f5sn8wyczt7wo4ak24zl04nyuhna2nj0y97qic4ehbmbleaxgjki6qz7wjjuo9zigvvi3qqm64msq3bj77w24zlaha5uxkegl5r4awgyx4dlihweqqyvb5abrltz95utwb3l6es7c5i5pew5nd8eggyqlwkyp1p5bzsf9jymcx5e7qf4xuyh166rlydxfr0zhito2lfxlmyrgyxwga57m1xqmimsq8y6h2354g6gbmokdavytxul32prjx0dyuesea52h5zmlaf2zqu4o3mtllm1gy9bakp0nnzygt5i0pa66ww74b5e8xjblqxpygqz69jgahtletof8bisxuoadmjg2r8105809wulzz1dtxtx0q4h22gxjrb2qmw9o4lf7ez1um4hjqkfbcwqcqzh7nq9b80yb3a4fw54si3qsnimsyb5s5mcchooqchi4 == \e\5\f\z\5\5\o\1\d\f\z\n\v\x\5\7\4\s\x\p\6\2\0\a\n\w\n\m\z\a\g\w\6\h\7\a\d\j\s\m\c\x\d\8\o\f\m\6\y\6\y\m\u\p\k\h\o\g\w\9\i\3\1\z\j\l\2\f\5\s\n\8\w\y\c\z\t\7\w\o\4\a\k\2\4\z\l\0\4\n\y\u\h\n\a\2\n\j\0\y\9\7\q\i\c\4\e\h\b\m\b\l\e\a\x\g\j\k\i\6\q\z\7\w\j\j\u\o\9\z\i\g\v\v\i\3\q\q\m\6\4\m\s\q\3\b\j\7\7\w\2\4\z\l\a\h\a\5\u\x\k\e\g\l\5\r\4\a\w\g\y\x\4\d\l\i\h\w\e\q\q\y\v\b\5\a\b\r\l\t\z\9\5\u\t\w\b\3\l\6\e\s\7\c\5\i\5\p\e\w\5\n\d\8\e\g\g\y\q\l\w\k\y\p\1\p\5\b\z\s\f\9\j\y\m\c\x\5\e\7\q\f\4\x\u\y\h\1\6\6\r\l\y\d\x\f\r\0\z\h\i\t\o\2\l\f\x\l\m\y\r\g\y\x\w\g\a\5\7\m\1\x\q\m\i\m\s\q\8\y\6\h\2\3\5\4\g\6\g\b\m\o\k\d\a\v\y\t\x\u\l\3\2\p\r\j\x\0\d\y\u\e\s\e\a\5\2\h\5\z\m\l\a\f\2\z\q\u\4\o\3\m\t\l\l\m\1\g\y\9\b\a\k\p\0\n\n\z\y\g\t\5\i\0\p\a\6\6\w\w\7\4\b\5\e\8\x\j\b\l\q\x\p\y\g\q\z\6\9\j\g\a\h\t\l\e\t\o\f\8\b\i\s\x\u\o\a\d\m\j\g\2\r\8\1\0\5\8\0\9\w\u\l\z\z\1\d\t\x\t\x\0\q\4\h\2\2\g\x\j\r\b\2\q\m\w\9\o\4\l\f\7\e\z\1\u\m\4\h\j\q\k\f\b\c\w\q\c\q\z\h\7\n\q\9\b\8\0\y\b\3\a\4\f\w\5\4\s\i\3\q\s\n\i\m\s\y\b\5\s\5\m\c\c\h\o\o\q\c\h\i\4 ]] 00:05:46.220 19:39:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:46.220 19:39:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:05:46.220 [2024-11-26 19:39:41.322512] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:05:46.220 [2024-11-26 19:39:41.322604] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59728 ] 00:05:46.220 [2024-11-26 19:39:41.459824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.477 [2024-11-26 19:39:41.498486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.477 [2024-11-26 19:39:41.532653] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:46.477  [2024-11-26T19:39:41.724Z] Copying: 512/512 [B] (average 500 kBps) 00:05:46.477 00:05:46.477 19:39:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ e5fz55o1dfznvx574sxp620anwnmzagw6h7adjsmcxd8ofm6y6ymupkhogw9i31zjl2f5sn8wyczt7wo4ak24zl04nyuhna2nj0y97qic4ehbmbleaxgjki6qz7wjjuo9zigvvi3qqm64msq3bj77w24zlaha5uxkegl5r4awgyx4dlihweqqyvb5abrltz95utwb3l6es7c5i5pew5nd8eggyqlwkyp1p5bzsf9jymcx5e7qf4xuyh166rlydxfr0zhito2lfxlmyrgyxwga57m1xqmimsq8y6h2354g6gbmokdavytxul32prjx0dyuesea52h5zmlaf2zqu4o3mtllm1gy9bakp0nnzygt5i0pa66ww74b5e8xjblqxpygqz69jgahtletof8bisxuoadmjg2r8105809wulzz1dtxtx0q4h22gxjrb2qmw9o4lf7ez1um4hjqkfbcwqcqzh7nq9b80yb3a4fw54si3qsnimsyb5s5mcchooqchi4 == \e\5\f\z\5\5\o\1\d\f\z\n\v\x\5\7\4\s\x\p\6\2\0\a\n\w\n\m\z\a\g\w\6\h\7\a\d\j\s\m\c\x\d\8\o\f\m\6\y\6\y\m\u\p\k\h\o\g\w\9\i\3\1\z\j\l\2\f\5\s\n\8\w\y\c\z\t\7\w\o\4\a\k\2\4\z\l\0\4\n\y\u\h\n\a\2\n\j\0\y\9\7\q\i\c\4\e\h\b\m\b\l\e\a\x\g\j\k\i\6\q\z\7\w\j\j\u\o\9\z\i\g\v\v\i\3\q\q\m\6\4\m\s\q\3\b\j\7\7\w\2\4\z\l\a\h\a\5\u\x\k\e\g\l\5\r\4\a\w\g\y\x\4\d\l\i\h\w\e\q\q\y\v\b\5\a\b\r\l\t\z\9\5\u\t\w\b\3\l\6\e\s\7\c\5\i\5\p\e\w\5\n\d\8\e\g\g\y\q\l\w\k\y\p\1\p\5\b\z\s\f\9\j\y\m\c\x\5\e\7\q\f\4\x\u\y\h\1\6\6\r\l\y\d\x\f\r\0\z\h\i\t\o\2\l\f\x\l\m\y\r\g\y\x\w\g\a\5\7\m\1\x\q\m\i\m\s\q\8\y\6\h\2\3\5\4\g\6\g\b\m\o\k\d\a\v\y\t\x\u\l\3\2\p\r\j\x\0\d\y\u\e\s\e\a\5\2\h\5\z\m\l\a\f\2\z\q\u\4\o\3\m\t\l\l\m\1\g\y\9\b\a\k\p\0\n\n\z\y\g\t\5\i\0\p\a\6\6\w\w\7\4\b\5\e\8\x\j\b\l\q\x\p\y\g\q\z\6\9\j\g\a\h\t\l\e\t\o\f\8\b\i\s\x\u\o\a\d\m\j\g\2\r\8\1\0\5\8\0\9\w\u\l\z\z\1\d\t\x\t\x\0\q\4\h\2\2\g\x\j\r\b\2\q\m\w\9\o\4\l\f\7\e\z\1\u\m\4\h\j\q\k\f\b\c\w\q\c\q\z\h\7\n\q\9\b\8\0\y\b\3\a\4\f\w\5\4\s\i\3\q\s\n\i\m\s\y\b\5\s\5\m\c\c\h\o\o\q\c\h\i\4 ]] 00:05:46.477 19:39:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:46.477 19:39:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:05:46.746 [2024-11-26 19:39:41.723627] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:05:46.746 [2024-11-26 19:39:41.723822] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59730 ] 00:05:46.746 [2024-11-26 19:39:41.860933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.746 [2024-11-26 19:39:41.899667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.746 [2024-11-26 19:39:41.933089] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:46.746  [2024-11-26T19:39:42.287Z] Copying: 512/512 [B] (average 500 kBps) 00:05:47.040 00:05:47.040 19:39:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ e5fz55o1dfznvx574sxp620anwnmzagw6h7adjsmcxd8ofm6y6ymupkhogw9i31zjl2f5sn8wyczt7wo4ak24zl04nyuhna2nj0y97qic4ehbmbleaxgjki6qz7wjjuo9zigvvi3qqm64msq3bj77w24zlaha5uxkegl5r4awgyx4dlihweqqyvb5abrltz95utwb3l6es7c5i5pew5nd8eggyqlwkyp1p5bzsf9jymcx5e7qf4xuyh166rlydxfr0zhito2lfxlmyrgyxwga57m1xqmimsq8y6h2354g6gbmokdavytxul32prjx0dyuesea52h5zmlaf2zqu4o3mtllm1gy9bakp0nnzygt5i0pa66ww74b5e8xjblqxpygqz69jgahtletof8bisxuoadmjg2r8105809wulzz1dtxtx0q4h22gxjrb2qmw9o4lf7ez1um4hjqkfbcwqcqzh7nq9b80yb3a4fw54si3qsnimsyb5s5mcchooqchi4 == \e\5\f\z\5\5\o\1\d\f\z\n\v\x\5\7\4\s\x\p\6\2\0\a\n\w\n\m\z\a\g\w\6\h\7\a\d\j\s\m\c\x\d\8\o\f\m\6\y\6\y\m\u\p\k\h\o\g\w\9\i\3\1\z\j\l\2\f\5\s\n\8\w\y\c\z\t\7\w\o\4\a\k\2\4\z\l\0\4\n\y\u\h\n\a\2\n\j\0\y\9\7\q\i\c\4\e\h\b\m\b\l\e\a\x\g\j\k\i\6\q\z\7\w\j\j\u\o\9\z\i\g\v\v\i\3\q\q\m\6\4\m\s\q\3\b\j\7\7\w\2\4\z\l\a\h\a\5\u\x\k\e\g\l\5\r\4\a\w\g\y\x\4\d\l\i\h\w\e\q\q\y\v\b\5\a\b\r\l\t\z\9\5\u\t\w\b\3\l\6\e\s\7\c\5\i\5\p\e\w\5\n\d\8\e\g\g\y\q\l\w\k\y\p\1\p\5\b\z\s\f\9\j\y\m\c\x\5\e\7\q\f\4\x\u\y\h\1\6\6\r\l\y\d\x\f\r\0\z\h\i\t\o\2\l\f\x\l\m\y\r\g\y\x\w\g\a\5\7\m\1\x\q\m\i\m\s\q\8\y\6\h\2\3\5\4\g\6\g\b\m\o\k\d\a\v\y\t\x\u\l\3\2\p\r\j\x\0\d\y\u\e\s\e\a\5\2\h\5\z\m\l\a\f\2\z\q\u\4\o\3\m\t\l\l\m\1\g\y\9\b\a\k\p\0\n\n\z\y\g\t\5\i\0\p\a\6\6\w\w\7\4\b\5\e\8\x\j\b\l\q\x\p\y\g\q\z\6\9\j\g\a\h\t\l\e\t\o\f\8\b\i\s\x\u\o\a\d\m\j\g\2\r\8\1\0\5\8\0\9\w\u\l\z\z\1\d\t\x\t\x\0\q\4\h\2\2\g\x\j\r\b\2\q\m\w\9\o\4\l\f\7\e\z\1\u\m\4\h\j\q\k\f\b\c\w\q\c\q\z\h\7\n\q\9\b\8\0\y\b\3\a\4\f\w\5\4\s\i\3\q\s\n\i\m\s\y\b\5\s\5\m\c\c\h\o\o\q\c\h\i\4 ]] 00:05:47.040 19:39:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:05:47.040 19:39:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:05:47.040 [2024-11-26 19:39:42.122128] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:05:47.040 [2024-11-26 19:39:42.122191] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59737 ] 00:05:47.040 [2024-11-26 19:39:42.263103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.297 [2024-11-26 19:39:42.301157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.297 [2024-11-26 19:39:42.334728] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:47.297  [2024-11-26T19:39:42.544Z] Copying: 512/512 [B] (average 250 kBps) 00:05:47.297 00:05:47.297 19:39:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ e5fz55o1dfznvx574sxp620anwnmzagw6h7adjsmcxd8ofm6y6ymupkhogw9i31zjl2f5sn8wyczt7wo4ak24zl04nyuhna2nj0y97qic4ehbmbleaxgjki6qz7wjjuo9zigvvi3qqm64msq3bj77w24zlaha5uxkegl5r4awgyx4dlihweqqyvb5abrltz95utwb3l6es7c5i5pew5nd8eggyqlwkyp1p5bzsf9jymcx5e7qf4xuyh166rlydxfr0zhito2lfxlmyrgyxwga57m1xqmimsq8y6h2354g6gbmokdavytxul32prjx0dyuesea52h5zmlaf2zqu4o3mtllm1gy9bakp0nnzygt5i0pa66ww74b5e8xjblqxpygqz69jgahtletof8bisxuoadmjg2r8105809wulzz1dtxtx0q4h22gxjrb2qmw9o4lf7ez1um4hjqkfbcwqcqzh7nq9b80yb3a4fw54si3qsnimsyb5s5mcchooqchi4 == \e\5\f\z\5\5\o\1\d\f\z\n\v\x\5\7\4\s\x\p\6\2\0\a\n\w\n\m\z\a\g\w\6\h\7\a\d\j\s\m\c\x\d\8\o\f\m\6\y\6\y\m\u\p\k\h\o\g\w\9\i\3\1\z\j\l\2\f\5\s\n\8\w\y\c\z\t\7\w\o\4\a\k\2\4\z\l\0\4\n\y\u\h\n\a\2\n\j\0\y\9\7\q\i\c\4\e\h\b\m\b\l\e\a\x\g\j\k\i\6\q\z\7\w\j\j\u\o\9\z\i\g\v\v\i\3\q\q\m\6\4\m\s\q\3\b\j\7\7\w\2\4\z\l\a\h\a\5\u\x\k\e\g\l\5\r\4\a\w\g\y\x\4\d\l\i\h\w\e\q\q\y\v\b\5\a\b\r\l\t\z\9\5\u\t\w\b\3\l\6\e\s\7\c\5\i\5\p\e\w\5\n\d\8\e\g\g\y\q\l\w\k\y\p\1\p\5\b\z\s\f\9\j\y\m\c\x\5\e\7\q\f\4\x\u\y\h\1\6\6\r\l\y\d\x\f\r\0\z\h\i\t\o\2\l\f\x\l\m\y\r\g\y\x\w\g\a\5\7\m\1\x\q\m\i\m\s\q\8\y\6\h\2\3\5\4\g\6\g\b\m\o\k\d\a\v\y\t\x\u\l\3\2\p\r\j\x\0\d\y\u\e\s\e\a\5\2\h\5\z\m\l\a\f\2\z\q\u\4\o\3\m\t\l\l\m\1\g\y\9\b\a\k\p\0\n\n\z\y\g\t\5\i\0\p\a\6\6\w\w\7\4\b\5\e\8\x\j\b\l\q\x\p\y\g\q\z\6\9\j\g\a\h\t\l\e\t\o\f\8\b\i\s\x\u\o\a\d\m\j\g\2\r\8\1\0\5\8\0\9\w\u\l\z\z\1\d\t\x\t\x\0\q\4\h\2\2\g\x\j\r\b\2\q\m\w\9\o\4\l\f\7\e\z\1\u\m\4\h\j\q\k\f\b\c\w\q\c\q\z\h\7\n\q\9\b\8\0\y\b\3\a\4\f\w\5\4\s\i\3\q\s\n\i\m\s\y\b\5\s\5\m\c\c\h\o\o\q\c\h\i\4 ]] 00:05:47.297 00:05:47.297 real 0m3.227s 00:05:47.297 user 0m1.575s 00:05:47.297 sys 0m0.676s 00:05:47.297 ************************************ 00:05:47.297 END TEST dd_flags_misc_forced_aio 00:05:47.297 ************************************ 00:05:47.297 19:39:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.297 19:39:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:05:47.297 19:39:42 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:05:47.297 19:39:42 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:05:47.297 19:39:42 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:05:47.297 ************************************ 00:05:47.297 END TEST spdk_dd_posix 00:05:47.297 ************************************ 00:05:47.297 00:05:47.297 real 0m14.724s 00:05:47.297 user 0m6.168s 00:05:47.297 sys 0m3.846s 00:05:47.297 19:39:42 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.297 19:39:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:05:47.555 19:39:42 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:05:47.555 19:39:42 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:47.555 19:39:42 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.555 19:39:42 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:05:47.555 ************************************ 00:05:47.555 START TEST spdk_dd_malloc 00:05:47.555 ************************************ 00:05:47.555 19:39:42 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:05:47.555 * Looking for test storage... 00:05:47.555 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:47.555 19:39:42 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:47.555 19:39:42 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:47.555 19:39:42 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:47.555 19:39:42 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:47.555 19:39:42 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:47.555 19:39:42 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:47.555 19:39:42 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:47.555 19:39:42 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.555 19:39:42 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:05:47.555 19:39:42 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:05:47.555 19:39:42 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:05:47.555 19:39:42 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:05:47.555 19:39:42 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:05:47.555 19:39:42 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:05:47.555 19:39:42 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:47.555 19:39:42 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:05:47.555 19:39:42 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:05:47.555 19:39:42 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:47.555 19:39:42 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:47.555 19:39:42 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:05:47.555 19:39:42 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:05:47.555 19:39:42 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:47.555 19:39:42 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:05:47.555 19:39:42 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:47.555 19:39:42 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:05:47.555 19:39:42 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:05:47.555 19:39:42 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:47.555 19:39:42 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:05:47.555 19:39:42 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:47.555 19:39:42 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:47.555 19:39:42 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:47.555 19:39:42 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:05:47.555 19:39:42 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:47.555 19:39:42 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:47.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.555 --rc genhtml_branch_coverage=1 00:05:47.555 --rc genhtml_function_coverage=1 00:05:47.555 --rc genhtml_legend=1 00:05:47.555 --rc geninfo_all_blocks=1 00:05:47.555 --rc geninfo_unexecuted_blocks=1 00:05:47.555 00:05:47.555 ' 00:05:47.555 19:39:42 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:47.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.555 --rc genhtml_branch_coverage=1 00:05:47.555 --rc genhtml_function_coverage=1 00:05:47.555 --rc genhtml_legend=1 00:05:47.555 --rc geninfo_all_blocks=1 00:05:47.555 --rc geninfo_unexecuted_blocks=1 00:05:47.555 00:05:47.555 ' 00:05:47.555 19:39:42 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:47.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.555 --rc genhtml_branch_coverage=1 00:05:47.555 --rc genhtml_function_coverage=1 00:05:47.555 --rc genhtml_legend=1 00:05:47.555 --rc geninfo_all_blocks=1 00:05:47.555 --rc geninfo_unexecuted_blocks=1 00:05:47.555 00:05:47.555 ' 00:05:47.555 19:39:42 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:47.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.555 --rc genhtml_branch_coverage=1 00:05:47.555 --rc genhtml_function_coverage=1 00:05:47.555 --rc genhtml_legend=1 00:05:47.555 --rc geninfo_all_blocks=1 00:05:47.555 --rc geninfo_unexecuted_blocks=1 00:05:47.555 00:05:47.555 ' 00:05:47.555 19:39:42 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:47.555 19:39:42 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:05:47.555 19:39:42 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:47.555 19:39:42 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:47.555 19:39:42 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:47.555 19:39:42 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.555 19:39:42 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.555 19:39:42 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.555 19:39:42 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:05:47.556 19:39:42 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.556 19:39:42 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:05:47.556 19:39:42 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:47.556 19:39:42 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.556 19:39:42 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:05:47.556 ************************************ 00:05:47.556 START TEST dd_malloc_copy 00:05:47.556 ************************************ 00:05:47.556 19:39:42 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1129 -- # malloc_copy 00:05:47.556 19:39:42 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:05:47.556 19:39:42 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:05:47.556 19:39:42 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
00:05:47.556 19:39:42 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:05:47.556 19:39:42 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:05:47.556 19:39:42 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:05:47.556 19:39:42 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:05:47.556 19:39:42 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:05:47.556 19:39:42 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:05:47.556 19:39:42 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:05:47.556 { 00:05:47.556 "subsystems": [ 00:05:47.556 { 00:05:47.556 "subsystem": "bdev", 00:05:47.556 "config": [ 00:05:47.556 { 00:05:47.556 "params": { 00:05:47.556 "block_size": 512, 00:05:47.556 "num_blocks": 1048576, 00:05:47.556 "name": "malloc0" 00:05:47.556 }, 00:05:47.556 "method": "bdev_malloc_create" 00:05:47.556 }, 00:05:47.556 { 00:05:47.556 "params": { 00:05:47.556 "block_size": 512, 00:05:47.556 "num_blocks": 1048576, 00:05:47.556 "name": "malloc1" 00:05:47.556 }, 00:05:47.556 "method": "bdev_malloc_create" 00:05:47.556 }, 00:05:47.556 { 00:05:47.556 "method": "bdev_wait_for_examine" 00:05:47.556 } 00:05:47.556 ] 00:05:47.556 } 00:05:47.556 ] 00:05:47.556 } 00:05:47.556 [2024-11-26 19:39:42.754287] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:05:47.556 [2024-11-26 19:39:42.754351] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59814 ] 00:05:47.814 [2024-11-26 19:39:42.895106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.814 [2024-11-26 19:39:42.934706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.814 [2024-11-26 19:39:42.969504] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:49.192  [2024-11-26T19:39:45.369Z] Copying: 207/512 [MB] (207 MBps) [2024-11-26T19:39:45.958Z] Copying: 415/512 [MB] (207 MBps) [2024-11-26T19:39:45.958Z] Copying: 512/512 [MB] (average 207 MBps) 00:05:50.711 00:05:50.711 19:39:45 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:05:50.969 19:39:45 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:05:50.969 19:39:45 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:05:50.969 19:39:45 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:05:50.969 [2024-11-26 19:39:45.986620] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:05:50.969 [2024-11-26 19:39:45.986807] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59856 ] 00:05:50.969 { 00:05:50.969 "subsystems": [ 00:05:50.969 { 00:05:50.969 "subsystem": "bdev", 00:05:50.969 "config": [ 00:05:50.969 { 00:05:50.969 "params": { 00:05:50.969 "block_size": 512, 00:05:50.969 "num_blocks": 1048576, 00:05:50.969 "name": "malloc0" 00:05:50.969 }, 00:05:50.969 "method": "bdev_malloc_create" 00:05:50.969 }, 00:05:50.969 { 00:05:50.969 "params": { 00:05:50.969 "block_size": 512, 00:05:50.969 "num_blocks": 1048576, 00:05:50.969 "name": "malloc1" 00:05:50.969 }, 00:05:50.969 "method": "bdev_malloc_create" 00:05:50.969 }, 00:05:50.969 { 00:05:50.969 "method": "bdev_wait_for_examine" 00:05:50.969 } 00:05:50.969 ] 00:05:50.969 } 00:05:50.969 ] 00:05:50.969 } 00:05:50.969 [2024-11-26 19:39:46.122277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.969 [2024-11-26 19:39:46.158272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.969 [2024-11-26 19:39:46.188837] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:52.340  [2024-11-26T19:39:48.548Z] Copying: 207/512 [MB] (207 MBps) [2024-11-26T19:39:49.113Z] Copying: 413/512 [MB] (206 MBps) [2024-11-26T19:39:49.374Z] Copying: 512/512 [MB] (average 206 MBps) 00:05:54.127 00:05:54.127 00:05:54.127 real 0m6.439s 00:05:54.127 ************************************ 00:05:54.127 END TEST dd_malloc_copy 00:05:54.127 ************************************ 00:05:54.127 user 0m5.758s 00:05:54.127 sys 0m0.487s 00:05:54.127 19:39:49 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.127 19:39:49 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:05:54.127 ************************************ 00:05:54.127 END TEST spdk_dd_malloc 00:05:54.127 ************************************ 00:05:54.127 00:05:54.127 real 0m6.623s 00:05:54.127 user 0m5.869s 00:05:54.127 sys 0m0.562s 00:05:54.127 19:39:49 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.127 19:39:49 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:05:54.127 19:39:49 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:05:54.127 19:39:49 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:54.127 19:39:49 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.127 19:39:49 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:05:54.127 ************************************ 00:05:54.127 START TEST spdk_dd_bdev_to_bdev 00:05:54.127 ************************************ 00:05:54.127 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:05:54.127 * Looking for test storage... 
00:05:54.127 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:54.127 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:54.127 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # lcov --version 00:05:54.127 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:54.127 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:54.127 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:54.127 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:54.127 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:54.127 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:05:54.127 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:05:54.127 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:05:54.127 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:05:54.127 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:05:54.127 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:05:54.127 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:05:54.127 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:54.127 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:05:54.127 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:05:54.127 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:54.127 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:54.127 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:05:54.127 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:05:54.127 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:54.127 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:05:54.127 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:05:54.127 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:05:54.127 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:05:54.127 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:54.127 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:05:54.127 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:05:54.127 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:54.127 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:54.127 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:05:54.127 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:54.127 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:54.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.127 --rc genhtml_branch_coverage=1 00:05:54.127 --rc genhtml_function_coverage=1 00:05:54.127 --rc genhtml_legend=1 00:05:54.127 --rc geninfo_all_blocks=1 00:05:54.127 --rc geninfo_unexecuted_blocks=1 00:05:54.127 00:05:54.127 ' 00:05:54.127 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:54.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.127 --rc genhtml_branch_coverage=1 00:05:54.127 --rc genhtml_function_coverage=1 00:05:54.127 --rc genhtml_legend=1 00:05:54.127 --rc geninfo_all_blocks=1 00:05:54.127 --rc geninfo_unexecuted_blocks=1 00:05:54.127 00:05:54.127 ' 00:05:54.127 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:54.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.127 --rc genhtml_branch_coverage=1 00:05:54.127 --rc genhtml_function_coverage=1 00:05:54.127 --rc genhtml_legend=1 00:05:54.127 --rc geninfo_all_blocks=1 00:05:54.127 --rc geninfo_unexecuted_blocks=1 00:05:54.127 00:05:54.127 ' 00:05:54.127 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:54.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.127 --rc genhtml_branch_coverage=1 00:05:54.127 --rc genhtml_function_coverage=1 00:05:54.127 --rc genhtml_legend=1 00:05:54.127 --rc geninfo_all_blocks=1 00:05:54.127 --rc geninfo_unexecuted_blocks=1 00:05:54.127 00:05:54.127 ' 00:05:54.127 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:54.127 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:05:54.127 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:54.127 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:54.127 19:39:49 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:54.128 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.128 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.128 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.128 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:05:54.128 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.128 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:05:54.128 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:05:54.128 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:05:54.128 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:05:54.128 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:05:54.128 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:05:54.128 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:05:54.128 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:05:54.128 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:05:54.128 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:05:54.128 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:05:54.128 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:05:54.128 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:05:54.128 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:05:54.128 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:54.128 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:54.128 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:05:54.128 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:05:54.128 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:05:54.128 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:05:54.128 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.128 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:05:54.128 ************************************ 00:05:54.128 START TEST dd_inflate_file 00:05:54.128 ************************************ 00:05:54.386 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:05:54.386 [2024-11-26 19:39:49.404532] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:05:54.386 [2024-11-26 19:39:49.404727] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59974 ] 00:05:54.386 [2024-11-26 19:39:49.545377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.386 [2024-11-26 19:39:49.581688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.386 [2024-11-26 19:39:49.613459] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:54.644  [2024-11-26T19:39:49.891Z] Copying: 64/64 [MB] (average 2461 MBps) 00:05:54.644 00:05:54.644 00:05:54.644 real 0m0.396s 00:05:54.644 user 0m0.210s 00:05:54.644 sys 0m0.176s 00:05:54.644 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.644 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:05:54.644 ************************************ 00:05:54.644 END TEST dd_inflate_file 00:05:54.644 ************************************ 00:05:54.644 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:05:54.644 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:05:54.644 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:05:54.644 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:05:54.644 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:54.644 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.644 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:05:54.644 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:05:54.644 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:05:54.644 ************************************ 00:05:54.644 START TEST dd_copy_to_out_bdev 00:05:54.644 ************************************ 00:05:54.644 19:39:49 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:05:54.644 { 00:05:54.644 "subsystems": [ 00:05:54.644 { 00:05:54.644 "subsystem": "bdev", 00:05:54.644 "config": [ 00:05:54.644 { 00:05:54.644 "params": { 00:05:54.644 "trtype": "pcie", 00:05:54.644 "traddr": "0000:00:10.0", 00:05:54.644 "name": "Nvme0" 00:05:54.644 }, 00:05:54.644 "method": "bdev_nvme_attach_controller" 00:05:54.644 }, 00:05:54.644 { 00:05:54.644 "params": { 00:05:54.644 "trtype": "pcie", 00:05:54.644 "traddr": "0000:00:11.0", 00:05:54.644 "name": "Nvme1" 00:05:54.644 }, 00:05:54.644 "method": "bdev_nvme_attach_controller" 00:05:54.644 }, 00:05:54.644 { 00:05:54.644 "method": "bdev_wait_for_examine" 00:05:54.644 } 00:05:54.644 ] 00:05:54.644 } 00:05:54.644 ] 00:05:54.644 } 00:05:54.644 [2024-11-26 19:39:49.853635] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:05:54.644 [2024-11-26 19:39:49.853838] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60002 ] 00:05:54.901 [2024-11-26 19:39:49.990872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.901 [2024-11-26 19:39:50.028798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.901 [2024-11-26 19:39:50.061588] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:55.832  [2024-11-26T19:39:51.337Z] Copying: 64/64 [MB] (average 89 MBps) 00:05:56.090 00:05:56.090 00:05:56.090 real 0m1.301s 00:05:56.090 user 0m1.095s 00:05:56.090 sys 0m0.994s 00:05:56.090 19:39:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.090 19:39:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:05:56.090 ************************************ 00:05:56.090 END TEST dd_copy_to_out_bdev 00:05:56.090 ************************************ 00:05:56.090 19:39:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:05:56.090 19:39:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:05:56.090 19:39:51 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:56.090 19:39:51 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.090 19:39:51 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:05:56.090 ************************************ 00:05:56.090 START TEST dd_offset_magic 00:05:56.090 ************************************ 00:05:56.090 19:39:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1129 -- # offset_magic 00:05:56.090 19:39:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:05:56.090 19:39:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:05:56.090 19:39:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:05:56.090 19:39:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:05:56.090 19:39:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:05:56.090 19:39:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:05:56.090 19:39:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:05:56.090 19:39:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:05:56.090 [2024-11-26 19:39:51.207148] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:05:56.090 [2024-11-26 19:39:51.207230] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60037 ] 00:05:56.090 { 00:05:56.090 "subsystems": [ 00:05:56.090 { 00:05:56.090 "subsystem": "bdev", 00:05:56.090 "config": [ 00:05:56.090 { 00:05:56.090 "params": { 00:05:56.090 "trtype": "pcie", 00:05:56.090 "traddr": "0000:00:10.0", 00:05:56.090 "name": "Nvme0" 00:05:56.090 }, 00:05:56.090 "method": "bdev_nvme_attach_controller" 00:05:56.090 }, 00:05:56.090 { 00:05:56.090 "params": { 00:05:56.090 "trtype": "pcie", 00:05:56.090 "traddr": "0000:00:11.0", 00:05:56.090 "name": "Nvme1" 00:05:56.090 }, 00:05:56.090 "method": "bdev_nvme_attach_controller" 00:05:56.090 }, 00:05:56.090 { 00:05:56.090 "method": "bdev_wait_for_examine" 00:05:56.090 } 00:05:56.090 ] 00:05:56.090 } 00:05:56.090 ] 00:05:56.090 } 00:05:56.348 [2024-11-26 19:39:51.349264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.348 [2024-11-26 19:39:51.384922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.348 [2024-11-26 19:39:51.416746] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:56.606  [2024-11-26T19:39:51.853Z] Copying: 65/65 [MB] (average 942 MBps) 00:05:56.606 00:05:56.606 19:39:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:05:56.606 19:39:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:05:56.606 19:39:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:05:56.606 19:39:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:05:56.863 [2024-11-26 19:39:51.854878] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:05:56.863 [2024-11-26 19:39:51.854963] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60057 ] 00:05:56.863 { 00:05:56.863 "subsystems": [ 00:05:56.863 { 00:05:56.863 "subsystem": "bdev", 00:05:56.863 "config": [ 00:05:56.863 { 00:05:56.863 "params": { 00:05:56.863 "trtype": "pcie", 00:05:56.863 "traddr": "0000:00:10.0", 00:05:56.863 "name": "Nvme0" 00:05:56.863 }, 00:05:56.863 "method": "bdev_nvme_attach_controller" 00:05:56.863 }, 00:05:56.863 { 00:05:56.863 "params": { 00:05:56.863 "trtype": "pcie", 00:05:56.863 "traddr": "0000:00:11.0", 00:05:56.863 "name": "Nvme1" 00:05:56.863 }, 00:05:56.863 "method": "bdev_nvme_attach_controller" 00:05:56.863 }, 00:05:56.863 { 00:05:56.863 "method": "bdev_wait_for_examine" 00:05:56.863 } 00:05:56.863 ] 00:05:56.863 } 00:05:56.863 ] 00:05:56.863 } 00:05:56.863 [2024-11-26 19:39:51.995120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.863 [2024-11-26 19:39:52.026222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.863 [2024-11-26 19:39:52.055318] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:57.119  [2024-11-26T19:39:52.366Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:57.119 00:05:57.120 19:39:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:05:57.120 19:39:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:05:57.120 19:39:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:05:57.120 19:39:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:05:57.120 19:39:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:05:57.120 19:39:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:05:57.120 19:39:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:05:57.120 { 00:05:57.120 "subsystems": [ 00:05:57.120 { 00:05:57.120 "subsystem": "bdev", 00:05:57.120 "config": [ 00:05:57.120 { 00:05:57.120 "params": { 00:05:57.120 "trtype": "pcie", 00:05:57.120 "traddr": "0000:00:10.0", 00:05:57.120 "name": "Nvme0" 00:05:57.120 }, 00:05:57.120 "method": "bdev_nvme_attach_controller" 00:05:57.120 }, 00:05:57.120 { 00:05:57.120 "params": { 00:05:57.120 "trtype": "pcie", 00:05:57.120 "traddr": "0000:00:11.0", 00:05:57.120 "name": "Nvme1" 00:05:57.120 }, 00:05:57.120 "method": "bdev_nvme_attach_controller" 00:05:57.120 }, 00:05:57.120 { 00:05:57.120 "method": "bdev_wait_for_examine" 00:05:57.120 } 00:05:57.120 ] 00:05:57.120 } 00:05:57.120 ] 00:05:57.120 } 00:05:57.120 [2024-11-26 19:39:52.351998] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:05:57.120 [2024-11-26 19:39:52.352055] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60073 ] 00:05:57.377 [2024-11-26 19:39:52.485876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.377 [2024-11-26 19:39:52.519909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.377 [2024-11-26 19:39:52.551232] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:57.635  [2024-11-26T19:39:53.247Z] Copying: 65/65 [MB] (average 970 MBps) 00:05:58.000 00:05:58.000 19:39:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:05:58.000 19:39:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:05:58.000 19:39:53 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:05:58.000 19:39:53 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:05:58.000 [2024-11-26 19:39:53.033621] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:05:58.000 [2024-11-26 19:39:53.033684] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60088 ] 00:05:58.000 { 00:05:58.000 "subsystems": [ 00:05:58.000 { 00:05:58.000 "subsystem": "bdev", 00:05:58.000 "config": [ 00:05:58.000 { 00:05:58.000 "params": { 00:05:58.000 "trtype": "pcie", 00:05:58.000 "traddr": "0000:00:10.0", 00:05:58.000 "name": "Nvme0" 00:05:58.000 }, 00:05:58.000 "method": "bdev_nvme_attach_controller" 00:05:58.000 }, 00:05:58.000 { 00:05:58.000 "params": { 00:05:58.000 "trtype": "pcie", 00:05:58.000 "traddr": "0000:00:11.0", 00:05:58.000 "name": "Nvme1" 00:05:58.000 }, 00:05:58.000 "method": "bdev_nvme_attach_controller" 00:05:58.000 }, 00:05:58.000 { 00:05:58.000 "method": "bdev_wait_for_examine" 00:05:58.000 } 00:05:58.000 ] 00:05:58.000 } 00:05:58.000 ] 00:05:58.000 } 00:05:58.000 [2024-11-26 19:39:53.162436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.000 [2024-11-26 19:39:53.194357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.000 [2024-11-26 19:39:53.224414] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:58.258  [2024-11-26T19:39:53.505Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:58.258 00:05:58.258 19:39:53 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:05:58.258 19:39:53 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:05:58.258 00:05:58.258 real 0m2.319s 00:05:58.258 user 0m1.667s 00:05:58.258 sys 0m0.573s 00:05:58.258 19:39:53 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.258 19:39:53 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:05:58.258 
************************************ 00:05:58.258 END TEST dd_offset_magic 00:05:58.258 ************************************ 00:05:58.515 19:39:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:05:58.515 19:39:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:05:58.515 19:39:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:58.515 19:39:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:05:58.515 19:39:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:05:58.515 19:39:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:05:58.515 19:39:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:05:58.515 19:39:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:05:58.515 19:39:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:05:58.515 19:39:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:05:58.515 19:39:53 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:05:58.515 [2024-11-26 19:39:53.548200] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:05:58.515 [2024-11-26 19:39:53.548257] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60125 ] 00:05:58.515 { 00:05:58.515 "subsystems": [ 00:05:58.515 { 00:05:58.515 "subsystem": "bdev", 00:05:58.515 "config": [ 00:05:58.515 { 00:05:58.515 "params": { 00:05:58.515 "trtype": "pcie", 00:05:58.515 "traddr": "0000:00:10.0", 00:05:58.515 "name": "Nvme0" 00:05:58.515 }, 00:05:58.515 "method": "bdev_nvme_attach_controller" 00:05:58.515 }, 00:05:58.515 { 00:05:58.515 "params": { 00:05:58.515 "trtype": "pcie", 00:05:58.515 "traddr": "0000:00:11.0", 00:05:58.515 "name": "Nvme1" 00:05:58.515 }, 00:05:58.515 "method": "bdev_nvme_attach_controller" 00:05:58.515 }, 00:05:58.515 { 00:05:58.515 "method": "bdev_wait_for_examine" 00:05:58.515 } 00:05:58.515 ] 00:05:58.515 } 00:05:58.515 ] 00:05:58.515 } 00:05:58.515 [2024-11-26 19:39:53.686968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.515 [2024-11-26 19:39:53.722254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.515 [2024-11-26 19:39:53.753365] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:58.772  [2024-11-26T19:39:54.276Z] Copying: 5120/5120 [kB] (average 1250 MBps) 00:05:59.029 00:05:59.029 19:39:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:05:59.029 19:39:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:05:59.029 19:39:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:05:59.029 19:39:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:05:59.029 19:39:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:05:59.029 19:39:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:05:59.029 19:39:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 
--count=5 --json /dev/fd/62 00:05:59.029 19:39:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:05:59.029 19:39:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:05:59.029 19:39:54 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:05:59.029 [2024-11-26 19:39:54.063344] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:05:59.029 [2024-11-26 19:39:54.063407] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60135 ] 00:05:59.029 { 00:05:59.029 "subsystems": [ 00:05:59.029 { 00:05:59.029 "subsystem": "bdev", 00:05:59.029 "config": [ 00:05:59.029 { 00:05:59.029 "params": { 00:05:59.029 "trtype": "pcie", 00:05:59.029 "traddr": "0000:00:10.0", 00:05:59.029 "name": "Nvme0" 00:05:59.029 }, 00:05:59.029 "method": "bdev_nvme_attach_controller" 00:05:59.029 }, 00:05:59.029 { 00:05:59.029 "params": { 00:05:59.029 "trtype": "pcie", 00:05:59.029 "traddr": "0000:00:11.0", 00:05:59.029 "name": "Nvme1" 00:05:59.029 }, 00:05:59.029 "method": "bdev_nvme_attach_controller" 00:05:59.029 }, 00:05:59.029 { 00:05:59.029 "method": "bdev_wait_for_examine" 00:05:59.029 } 00:05:59.029 ] 00:05:59.029 } 00:05:59.029 ] 00:05:59.029 } 00:05:59.029 [2024-11-26 19:39:54.196112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.029 [2024-11-26 19:39:54.231518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.029 [2024-11-26 19:39:54.262834] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:59.286  [2024-11-26T19:39:54.790Z] Copying: 5120/5120 [kB] (average 833 MBps) 00:05:59.543 00:05:59.543 19:39:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:05:59.543 00:05:59.543 real 0m5.337s 00:05:59.543 user 0m3.790s 00:05:59.543 sys 0m2.230s 00:05:59.543 19:39:54 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:59.543 19:39:54 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:05:59.543 ************************************ 00:05:59.543 END TEST spdk_dd_bdev_to_bdev 00:05:59.543 ************************************ 00:05:59.543 19:39:54 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:05:59.543 19:39:54 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:05:59.543 19:39:54 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:59.543 19:39:54 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:59.543 19:39:54 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:05:59.543 ************************************ 00:05:59.543 START TEST spdk_dd_uring 00:05:59.543 ************************************ 00:05:59.543 19:39:54 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:05:59.543 * Looking for test storage... 
00:05:59.543 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:59.543 19:39:54 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:59.543 19:39:54 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # lcov --version 00:05:59.543 19:39:54 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:59.543 19:39:54 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:59.543 19:39:54 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:59.543 19:39:54 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:59.543 19:39:54 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:59.543 19:39:54 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:05:59.543 19:39:54 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:05:59.543 19:39:54 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:05:59.543 19:39:54 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:05:59.544 19:39:54 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:05:59.544 19:39:54 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:05:59.544 19:39:54 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:05:59.544 19:39:54 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:59.544 19:39:54 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:05:59.544 19:39:54 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:05:59.544 19:39:54 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:59.544 19:39:54 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:59.544 19:39:54 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:05:59.544 19:39:54 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:05:59.544 19:39:54 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:59.544 19:39:54 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:05:59.544 19:39:54 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:05:59.544 19:39:54 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:05:59.544 19:39:54 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:05:59.544 19:39:54 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:59.544 19:39:54 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:05:59.544 19:39:54 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:05:59.544 19:39:54 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:59.544 19:39:54 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:59.544 19:39:54 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:05:59.544 19:39:54 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:59.544 19:39:54 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:59.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.544 --rc genhtml_branch_coverage=1 00:05:59.544 --rc genhtml_function_coverage=1 00:05:59.544 --rc genhtml_legend=1 00:05:59.544 --rc geninfo_all_blocks=1 00:05:59.544 --rc geninfo_unexecuted_blocks=1 00:05:59.544 00:05:59.544 ' 00:05:59.544 19:39:54 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:59.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.544 --rc genhtml_branch_coverage=1 00:05:59.544 --rc genhtml_function_coverage=1 00:05:59.544 --rc genhtml_legend=1 00:05:59.544 --rc geninfo_all_blocks=1 00:05:59.544 --rc geninfo_unexecuted_blocks=1 00:05:59.544 00:05:59.544 ' 00:05:59.544 19:39:54 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:59.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.544 --rc genhtml_branch_coverage=1 00:05:59.544 --rc genhtml_function_coverage=1 00:05:59.544 --rc genhtml_legend=1 00:05:59.544 --rc geninfo_all_blocks=1 00:05:59.544 --rc geninfo_unexecuted_blocks=1 00:05:59.544 00:05:59.544 ' 00:05:59.544 19:39:54 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:59.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.544 --rc genhtml_branch_coverage=1 00:05:59.544 --rc genhtml_function_coverage=1 00:05:59.544 --rc genhtml_legend=1 00:05:59.544 --rc geninfo_all_blocks=1 00:05:59.544 --rc geninfo_unexecuted_blocks=1 00:05:59.544 00:05:59.544 ' 00:05:59.544 19:39:54 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:59.544 19:39:54 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:05:59.544 19:39:54 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:59.544 19:39:54 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:59.544 19:39:54 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:59.544 19:39:54 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.544 19:39:54 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.544 19:39:54 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.544 19:39:54 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:05:59.544 19:39:54 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.544 19:39:54 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:05:59.544 19:39:54 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:59.544 19:39:54 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:59.544 19:39:54 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:05:59.544 ************************************ 00:05:59.544 START TEST dd_uring_copy 00:05:59.544 ************************************ 00:05:59.544 19:39:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1129 -- # uring_zram_copy 00:05:59.544 19:39:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:05:59.544 19:39:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:05:59.544 19:39:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:05:59.544 19:39:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:05:59.544 
19:39:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:05:59.544 19:39:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:05:59.544 19:39:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:05:59.544 19:39:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:05:59.544 19:39:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:05:59.544 19:39:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:05:59.544 19:39:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:05:59.544 19:39:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:05:59.544 19:39:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:05:59.544 19:39:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:05:59.544 19:39:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:05:59.544 19:39:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:05:59.544 19:39:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:05:59.544 19:39:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:05:59.544 19:39:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:05:59.544 19:39:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:05:59.544 19:39:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:05:59.544 19:39:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:05:59.544 19:39:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:05:59.544 19:39:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:05:59.544 19:39:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:05:59.544 19:39:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=pzsq4h29zzutr5stskiraiewmi3yaur7xfa22f50fqapy6bjac9tnyd6m3tb7byjrv4w2ityqg9j87ki6h8n1poyn19s92susewugqj3bx9t830tz4dcr3yoy77gs82yuqmh03mvl3x0utmixajhd4sdyq638k8oqhkxie0c5l4iblgezawu62hrih784yozvqlnnkkalxvakqusnnuprvkcmfgwiztu0s0w14lr7q4nxemneygqki9xnpqn1ty9ugmidvn5rluvn64xp7c0ribstcydzxabmtruryg2z40bnj3gi9152i9b1dase8qcopm2hnbneddbg1prsiqvl2ck1yj842sbrsb8uhsb6u77tatz5nasihqwuqvt0c6prpbz3qlgwtg2a5cvjsq31znsdz7hweitm0104wu18861wnzt8cz0fd5oyk7krg5fh4d6ztnlkrcuxeiw4ytyqgj86gvdm5lsn7tz6g007c1wfe17zbbyq8x6kapndsspivn2nv9i9a4k5wcgtjoeghuk5cop85yokfxj2f0somd9iki9yxskphz4b7hpkuyfwcy78rkyy13wq8oy78ckoq2i8e9lqc7aviem3syghm3eu8wy9679wrzfqyyohb2mfrmxvxagandk7dacxnwvll3yh7gxifcql67kn0pnfjk9l6qe1x7hxhux5wqvvovdur6wgmkay4usuenjcwjoc1axfy229fy13vbg2555v20tu3j475t37exqx5nda8fvakoufry49sg0t7u4hacqujklqf2lff9qeq1tsvd59wjmd9zk00zaermj8x978pt4etsbpvsqah6e3xoy2s492an071uzishg8pqinbuinv6gtptnu2xd96ecbmi8jqrvcfii81of8xwauxg2tiwgfc0wocshpeego2jkztuhhcscfvh055srjd4lap0ze8fwy46gn3lcoy68j44a6swfearibe21s6h0f0h00f0rhzd6i5crlhvm9phww80ah9sy 00:05:59.545 19:39:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
pzsq4h29zzutr5stskiraiewmi3yaur7xfa22f50fqapy6bjac9tnyd6m3tb7byjrv4w2ityqg9j87ki6h8n1poyn19s92susewugqj3bx9t830tz4dcr3yoy77gs82yuqmh03mvl3x0utmixajhd4sdyq638k8oqhkxie0c5l4iblgezawu62hrih784yozvqlnnkkalxvakqusnnuprvkcmfgwiztu0s0w14lr7q4nxemneygqki9xnpqn1ty9ugmidvn5rluvn64xp7c0ribstcydzxabmtruryg2z40bnj3gi9152i9b1dase8qcopm2hnbneddbg1prsiqvl2ck1yj842sbrsb8uhsb6u77tatz5nasihqwuqvt0c6prpbz3qlgwtg2a5cvjsq31znsdz7hweitm0104wu18861wnzt8cz0fd5oyk7krg5fh4d6ztnlkrcuxeiw4ytyqgj86gvdm5lsn7tz6g007c1wfe17zbbyq8x6kapndsspivn2nv9i9a4k5wcgtjoeghuk5cop85yokfxj2f0somd9iki9yxskphz4b7hpkuyfwcy78rkyy13wq8oy78ckoq2i8e9lqc7aviem3syghm3eu8wy9679wrzfqyyohb2mfrmxvxagandk7dacxnwvll3yh7gxifcql67kn0pnfjk9l6qe1x7hxhux5wqvvovdur6wgmkay4usuenjcwjoc1axfy229fy13vbg2555v20tu3j475t37exqx5nda8fvakoufry49sg0t7u4hacqujklqf2lff9qeq1tsvd59wjmd9zk00zaermj8x978pt4etsbpvsqah6e3xoy2s492an071uzishg8pqinbuinv6gtptnu2xd96ecbmi8jqrvcfii81of8xwauxg2tiwgfc0wocshpeego2jkztuhhcscfvh055srjd4lap0ze8fwy46gn3lcoy68j44a6swfearibe21s6h0f0h00f0rhzd6i5crlhvm9phww80ah9sy 00:05:59.545 19:39:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:05:59.802 [2024-11-26 19:39:54.807384] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:05:59.802 [2024-11-26 19:39:54.807450] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60213 ] 00:05:59.802 [2024-11-26 19:39:54.945097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.802 [2024-11-26 19:39:54.981056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.802 [2024-11-26 19:39:55.012078] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:00.366  [2024-11-26T19:39:55.613Z] Copying: 511/511 [MB] (average 2337 MBps) 00:06:00.366 00:06:00.366 19:39:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:06:00.366 19:39:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:06:00.366 19:39:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:00.366 19:39:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:00.623 [2024-11-26 19:39:55.614470] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:06:00.623 [2024-11-26 19:39:55.614535] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60229 ] 00:06:00.623 { 00:06:00.623 "subsystems": [ 00:06:00.623 { 00:06:00.623 "subsystem": "bdev", 00:06:00.623 "config": [ 00:06:00.623 { 00:06:00.623 "params": { 00:06:00.623 "block_size": 512, 00:06:00.623 "num_blocks": 1048576, 00:06:00.623 "name": "malloc0" 00:06:00.623 }, 00:06:00.623 "method": "bdev_malloc_create" 00:06:00.623 }, 00:06:00.623 { 00:06:00.623 "params": { 00:06:00.623 "filename": "/dev/zram1", 00:06:00.623 "name": "uring0" 00:06:00.623 }, 00:06:00.623 "method": "bdev_uring_create" 00:06:00.623 }, 00:06:00.623 { 00:06:00.623 "method": "bdev_wait_for_examine" 00:06:00.623 } 00:06:00.623 ] 00:06:00.623 } 00:06:00.623 ] 00:06:00.623 } 00:06:00.623 [2024-11-26 19:39:55.753965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.623 [2024-11-26 19:39:55.790238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.623 [2024-11-26 19:39:55.821976] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:01.993  [2024-11-26T19:39:57.868Z] Copying: 268/512 [MB] (268 MBps) [2024-11-26T19:39:58.125Z] Copying: 512/512 [MB] (average 268 MBps) 00:06:02.878 00:06:02.878 19:39:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:06:02.878 19:39:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:06:02.878 19:39:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:02.878 19:39:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:02.878 [2024-11-26 19:39:58.085608] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:06:02.878 [2024-11-26 19:39:58.085686] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60262 ] 00:06:02.878 { 00:06:02.878 "subsystems": [ 00:06:02.878 { 00:06:02.878 "subsystem": "bdev", 00:06:02.878 "config": [ 00:06:02.878 { 00:06:02.878 "params": { 00:06:02.878 "block_size": 512, 00:06:02.878 "num_blocks": 1048576, 00:06:02.878 "name": "malloc0" 00:06:02.878 }, 00:06:02.878 "method": "bdev_malloc_create" 00:06:02.878 }, 00:06:02.878 { 00:06:02.878 "params": { 00:06:02.878 "filename": "/dev/zram1", 00:06:02.878 "name": "uring0" 00:06:02.878 }, 00:06:02.878 "method": "bdev_uring_create" 00:06:02.878 }, 00:06:02.878 { 00:06:02.878 "method": "bdev_wait_for_examine" 00:06:02.878 } 00:06:02.878 ] 00:06:02.878 } 00:06:02.878 ] 00:06:02.878 } 00:06:03.136 [2024-11-26 19:39:58.223191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.136 [2024-11-26 19:39:58.259102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.136 [2024-11-26 19:39:58.290646] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:04.539  [2024-11-26T19:40:00.719Z] Copying: 201/512 [MB] (201 MBps) [2024-11-26T19:40:01.283Z] Copying: 388/512 [MB] (187 MBps) [2024-11-26T19:40:01.540Z] Copying: 512/512 [MB] (average 189 MBps) 00:06:06.293 00:06:06.293 19:40:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:06:06.293 19:40:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ pzsq4h29zzutr5stskiraiewmi3yaur7xfa22f50fqapy6bjac9tnyd6m3tb7byjrv4w2ityqg9j87ki6h8n1poyn19s92susewugqj3bx9t830tz4dcr3yoy77gs82yuqmh03mvl3x0utmixajhd4sdyq638k8oqhkxie0c5l4iblgezawu62hrih784yozvqlnnkkalxvakqusnnuprvkcmfgwiztu0s0w14lr7q4nxemneygqki9xnpqn1ty9ugmidvn5rluvn64xp7c0ribstcydzxabmtruryg2z40bnj3gi9152i9b1dase8qcopm2hnbneddbg1prsiqvl2ck1yj842sbrsb8uhsb6u77tatz5nasihqwuqvt0c6prpbz3qlgwtg2a5cvjsq31znsdz7hweitm0104wu18861wnzt8cz0fd5oyk7krg5fh4d6ztnlkrcuxeiw4ytyqgj86gvdm5lsn7tz6g007c1wfe17zbbyq8x6kapndsspivn2nv9i9a4k5wcgtjoeghuk5cop85yokfxj2f0somd9iki9yxskphz4b7hpkuyfwcy78rkyy13wq8oy78ckoq2i8e9lqc7aviem3syghm3eu8wy9679wrzfqyyohb2mfrmxvxagandk7dacxnwvll3yh7gxifcql67kn0pnfjk9l6qe1x7hxhux5wqvvovdur6wgmkay4usuenjcwjoc1axfy229fy13vbg2555v20tu3j475t37exqx5nda8fvakoufry49sg0t7u4hacqujklqf2lff9qeq1tsvd59wjmd9zk00zaermj8x978pt4etsbpvsqah6e3xoy2s492an071uzishg8pqinbuinv6gtptnu2xd96ecbmi8jqrvcfii81of8xwauxg2tiwgfc0wocshpeego2jkztuhhcscfvh055srjd4lap0ze8fwy46gn3lcoy68j44a6swfearibe21s6h0f0h00f0rhzd6i5crlhvm9phww80ah9sy == 
\p\z\s\q\4\h\2\9\z\z\u\t\r\5\s\t\s\k\i\r\a\i\e\w\m\i\3\y\a\u\r\7\x\f\a\2\2\f\5\0\f\q\a\p\y\6\b\j\a\c\9\t\n\y\d\6\m\3\t\b\7\b\y\j\r\v\4\w\2\i\t\y\q\g\9\j\8\7\k\i\6\h\8\n\1\p\o\y\n\1\9\s\9\2\s\u\s\e\w\u\g\q\j\3\b\x\9\t\8\3\0\t\z\4\d\c\r\3\y\o\y\7\7\g\s\8\2\y\u\q\m\h\0\3\m\v\l\3\x\0\u\t\m\i\x\a\j\h\d\4\s\d\y\q\6\3\8\k\8\o\q\h\k\x\i\e\0\c\5\l\4\i\b\l\g\e\z\a\w\u\6\2\h\r\i\h\7\8\4\y\o\z\v\q\l\n\n\k\k\a\l\x\v\a\k\q\u\s\n\n\u\p\r\v\k\c\m\f\g\w\i\z\t\u\0\s\0\w\1\4\l\r\7\q\4\n\x\e\m\n\e\y\g\q\k\i\9\x\n\p\q\n\1\t\y\9\u\g\m\i\d\v\n\5\r\l\u\v\n\6\4\x\p\7\c\0\r\i\b\s\t\c\y\d\z\x\a\b\m\t\r\u\r\y\g\2\z\4\0\b\n\j\3\g\i\9\1\5\2\i\9\b\1\d\a\s\e\8\q\c\o\p\m\2\h\n\b\n\e\d\d\b\g\1\p\r\s\i\q\v\l\2\c\k\1\y\j\8\4\2\s\b\r\s\b\8\u\h\s\b\6\u\7\7\t\a\t\z\5\n\a\s\i\h\q\w\u\q\v\t\0\c\6\p\r\p\b\z\3\q\l\g\w\t\g\2\a\5\c\v\j\s\q\3\1\z\n\s\d\z\7\h\w\e\i\t\m\0\1\0\4\w\u\1\8\8\6\1\w\n\z\t\8\c\z\0\f\d\5\o\y\k\7\k\r\g\5\f\h\4\d\6\z\t\n\l\k\r\c\u\x\e\i\w\4\y\t\y\q\g\j\8\6\g\v\d\m\5\l\s\n\7\t\z\6\g\0\0\7\c\1\w\f\e\1\7\z\b\b\y\q\8\x\6\k\a\p\n\d\s\s\p\i\v\n\2\n\v\9\i\9\a\4\k\5\w\c\g\t\j\o\e\g\h\u\k\5\c\o\p\8\5\y\o\k\f\x\j\2\f\0\s\o\m\d\9\i\k\i\9\y\x\s\k\p\h\z\4\b\7\h\p\k\u\y\f\w\c\y\7\8\r\k\y\y\1\3\w\q\8\o\y\7\8\c\k\o\q\2\i\8\e\9\l\q\c\7\a\v\i\e\m\3\s\y\g\h\m\3\e\u\8\w\y\9\6\7\9\w\r\z\f\q\y\y\o\h\b\2\m\f\r\m\x\v\x\a\g\a\n\d\k\7\d\a\c\x\n\w\v\l\l\3\y\h\7\g\x\i\f\c\q\l\6\7\k\n\0\p\n\f\j\k\9\l\6\q\e\1\x\7\h\x\h\u\x\5\w\q\v\v\o\v\d\u\r\6\w\g\m\k\a\y\4\u\s\u\e\n\j\c\w\j\o\c\1\a\x\f\y\2\2\9\f\y\1\3\v\b\g\2\5\5\5\v\2\0\t\u\3\j\4\7\5\t\3\7\e\x\q\x\5\n\d\a\8\f\v\a\k\o\u\f\r\y\4\9\s\g\0\t\7\u\4\h\a\c\q\u\j\k\l\q\f\2\l\f\f\9\q\e\q\1\t\s\v\d\5\9\w\j\m\d\9\z\k\0\0\z\a\e\r\m\j\8\x\9\7\8\p\t\4\e\t\s\b\p\v\s\q\a\h\6\e\3\x\o\y\2\s\4\9\2\a\n\0\7\1\u\z\i\s\h\g\8\p\q\i\n\b\u\i\n\v\6\g\t\p\t\n\u\2\x\d\9\6\e\c\b\m\i\8\j\q\r\v\c\f\i\i\8\1\o\f\8\x\w\a\u\x\g\2\t\i\w\g\f\c\0\w\o\c\s\h\p\e\e\g\o\2\j\k\z\t\u\h\h\c\s\c\f\v\h\0\5\5\s\r\j\d\4\l\a\p\0\z\e\8\f\w\y\4\6\g\n\3\l\c\o\y\6\8\j\4\4\a\6\s\w\f\e\a\r\i\b\e\2\1\s\6\h\0\f\0\h\0\0\f\0\r\h\z\d\6\i\5\c\r\l\h\v\m\9\p\h\w\w\8\0\a\h\9\s\y ]] 00:06:06.293 19:40:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:06:06.293 19:40:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ pzsq4h29zzutr5stskiraiewmi3yaur7xfa22f50fqapy6bjac9tnyd6m3tb7byjrv4w2ityqg9j87ki6h8n1poyn19s92susewugqj3bx9t830tz4dcr3yoy77gs82yuqmh03mvl3x0utmixajhd4sdyq638k8oqhkxie0c5l4iblgezawu62hrih784yozvqlnnkkalxvakqusnnuprvkcmfgwiztu0s0w14lr7q4nxemneygqki9xnpqn1ty9ugmidvn5rluvn64xp7c0ribstcydzxabmtruryg2z40bnj3gi9152i9b1dase8qcopm2hnbneddbg1prsiqvl2ck1yj842sbrsb8uhsb6u77tatz5nasihqwuqvt0c6prpbz3qlgwtg2a5cvjsq31znsdz7hweitm0104wu18861wnzt8cz0fd5oyk7krg5fh4d6ztnlkrcuxeiw4ytyqgj86gvdm5lsn7tz6g007c1wfe17zbbyq8x6kapndsspivn2nv9i9a4k5wcgtjoeghuk5cop85yokfxj2f0somd9iki9yxskphz4b7hpkuyfwcy78rkyy13wq8oy78ckoq2i8e9lqc7aviem3syghm3eu8wy9679wrzfqyyohb2mfrmxvxagandk7dacxnwvll3yh7gxifcql67kn0pnfjk9l6qe1x7hxhux5wqvvovdur6wgmkay4usuenjcwjoc1axfy229fy13vbg2555v20tu3j475t37exqx5nda8fvakoufry49sg0t7u4hacqujklqf2lff9qeq1tsvd59wjmd9zk00zaermj8x978pt4etsbpvsqah6e3xoy2s492an071uzishg8pqinbuinv6gtptnu2xd96ecbmi8jqrvcfii81of8xwauxg2tiwgfc0wocshpeego2jkztuhhcscfvh055srjd4lap0ze8fwy46gn3lcoy68j44a6swfearibe21s6h0f0h00f0rhzd6i5crlhvm9phww80ah9sy == 
\p\z\s\q\4\h\2\9\z\z\u\t\r\5\s\t\s\k\i\r\a\i\e\w\m\i\3\y\a\u\r\7\x\f\a\2\2\f\5\0\f\q\a\p\y\6\b\j\a\c\9\t\n\y\d\6\m\3\t\b\7\b\y\j\r\v\4\w\2\i\t\y\q\g\9\j\8\7\k\i\6\h\8\n\1\p\o\y\n\1\9\s\9\2\s\u\s\e\w\u\g\q\j\3\b\x\9\t\8\3\0\t\z\4\d\c\r\3\y\o\y\7\7\g\s\8\2\y\u\q\m\h\0\3\m\v\l\3\x\0\u\t\m\i\x\a\j\h\d\4\s\d\y\q\6\3\8\k\8\o\q\h\k\x\i\e\0\c\5\l\4\i\b\l\g\e\z\a\w\u\6\2\h\r\i\h\7\8\4\y\o\z\v\q\l\n\n\k\k\a\l\x\v\a\k\q\u\s\n\n\u\p\r\v\k\c\m\f\g\w\i\z\t\u\0\s\0\w\1\4\l\r\7\q\4\n\x\e\m\n\e\y\g\q\k\i\9\x\n\p\q\n\1\t\y\9\u\g\m\i\d\v\n\5\r\l\u\v\n\6\4\x\p\7\c\0\r\i\b\s\t\c\y\d\z\x\a\b\m\t\r\u\r\y\g\2\z\4\0\b\n\j\3\g\i\9\1\5\2\i\9\b\1\d\a\s\e\8\q\c\o\p\m\2\h\n\b\n\e\d\d\b\g\1\p\r\s\i\q\v\l\2\c\k\1\y\j\8\4\2\s\b\r\s\b\8\u\h\s\b\6\u\7\7\t\a\t\z\5\n\a\s\i\h\q\w\u\q\v\t\0\c\6\p\r\p\b\z\3\q\l\g\w\t\g\2\a\5\c\v\j\s\q\3\1\z\n\s\d\z\7\h\w\e\i\t\m\0\1\0\4\w\u\1\8\8\6\1\w\n\z\t\8\c\z\0\f\d\5\o\y\k\7\k\r\g\5\f\h\4\d\6\z\t\n\l\k\r\c\u\x\e\i\w\4\y\t\y\q\g\j\8\6\g\v\d\m\5\l\s\n\7\t\z\6\g\0\0\7\c\1\w\f\e\1\7\z\b\b\y\q\8\x\6\k\a\p\n\d\s\s\p\i\v\n\2\n\v\9\i\9\a\4\k\5\w\c\g\t\j\o\e\g\h\u\k\5\c\o\p\8\5\y\o\k\f\x\j\2\f\0\s\o\m\d\9\i\k\i\9\y\x\s\k\p\h\z\4\b\7\h\p\k\u\y\f\w\c\y\7\8\r\k\y\y\1\3\w\q\8\o\y\7\8\c\k\o\q\2\i\8\e\9\l\q\c\7\a\v\i\e\m\3\s\y\g\h\m\3\e\u\8\w\y\9\6\7\9\w\r\z\f\q\y\y\o\h\b\2\m\f\r\m\x\v\x\a\g\a\n\d\k\7\d\a\c\x\n\w\v\l\l\3\y\h\7\g\x\i\f\c\q\l\6\7\k\n\0\p\n\f\j\k\9\l\6\q\e\1\x\7\h\x\h\u\x\5\w\q\v\v\o\v\d\u\r\6\w\g\m\k\a\y\4\u\s\u\e\n\j\c\w\j\o\c\1\a\x\f\y\2\2\9\f\y\1\3\v\b\g\2\5\5\5\v\2\0\t\u\3\j\4\7\5\t\3\7\e\x\q\x\5\n\d\a\8\f\v\a\k\o\u\f\r\y\4\9\s\g\0\t\7\u\4\h\a\c\q\u\j\k\l\q\f\2\l\f\f\9\q\e\q\1\t\s\v\d\5\9\w\j\m\d\9\z\k\0\0\z\a\e\r\m\j\8\x\9\7\8\p\t\4\e\t\s\b\p\v\s\q\a\h\6\e\3\x\o\y\2\s\4\9\2\a\n\0\7\1\u\z\i\s\h\g\8\p\q\i\n\b\u\i\n\v\6\g\t\p\t\n\u\2\x\d\9\6\e\c\b\m\i\8\j\q\r\v\c\f\i\i\8\1\o\f\8\x\w\a\u\x\g\2\t\i\w\g\f\c\0\w\o\c\s\h\p\e\e\g\o\2\j\k\z\t\u\h\h\c\s\c\f\v\h\0\5\5\s\r\j\d\4\l\a\p\0\z\e\8\f\w\y\4\6\g\n\3\l\c\o\y\6\8\j\4\4\a\6\s\w\f\e\a\r\i\b\e\2\1\s\6\h\0\f\0\h\0\0\f\0\r\h\z\d\6\i\5\c\r\l\h\v\m\9\p\h\w\w\8\0\a\h\9\s\y ]] 00:06:06.293 19:40:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:06.551 19:40:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:06:06.551 19:40:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:06:06.551 19:40:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:06.551 19:40:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:06.551 [2024-11-26 19:40:01.592678] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:06:06.551 [2024-11-26 19:40:01.592739] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60322 ] 00:06:06.551 { 00:06:06.551 "subsystems": [ 00:06:06.551 { 00:06:06.551 "subsystem": "bdev", 00:06:06.551 "config": [ 00:06:06.551 { 00:06:06.551 "params": { 00:06:06.551 "block_size": 512, 00:06:06.551 "num_blocks": 1048576, 00:06:06.551 "name": "malloc0" 00:06:06.551 }, 00:06:06.551 "method": "bdev_malloc_create" 00:06:06.551 }, 00:06:06.551 { 00:06:06.551 "params": { 00:06:06.551 "filename": "/dev/zram1", 00:06:06.551 "name": "uring0" 00:06:06.551 }, 00:06:06.551 "method": "bdev_uring_create" 00:06:06.551 }, 00:06:06.551 { 00:06:06.551 "method": "bdev_wait_for_examine" 00:06:06.551 } 00:06:06.551 ] 00:06:06.551 } 00:06:06.551 ] 00:06:06.551 } 00:06:06.551 [2024-11-26 19:40:01.728734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.551 [2024-11-26 19:40:01.762130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.551 [2024-11-26 19:40:01.793140] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:07.924  [2024-11-26T19:40:04.131Z] Copying: 225/512 [MB] (225 MBps) [2024-11-26T19:40:04.389Z] Copying: 450/512 [MB] (225 MBps) [2024-11-26T19:40:04.389Z] Copying: 512/512 [MB] (average 226 MBps) 00:06:09.142 00:06:09.142 19:40:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:06:09.142 19:40:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:06:09.142 19:40:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:06:09.142 19:40:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:06:09.142 19:40:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:06:09.142 19:40:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:09.142 19:40:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:09.142 19:40:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:06:09.399 [2024-11-26 19:40:04.405119] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:06:09.399 [2024-11-26 19:40:04.405186] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60361 ] 00:06:09.399 { 00:06:09.399 "subsystems": [ 00:06:09.399 { 00:06:09.399 "subsystem": "bdev", 00:06:09.399 "config": [ 00:06:09.399 { 00:06:09.399 "params": { 00:06:09.399 "block_size": 512, 00:06:09.399 "num_blocks": 1048576, 00:06:09.399 "name": "malloc0" 00:06:09.399 }, 00:06:09.399 "method": "bdev_malloc_create" 00:06:09.399 }, 00:06:09.399 { 00:06:09.399 "params": { 00:06:09.399 "filename": "/dev/zram1", 00:06:09.399 "name": "uring0" 00:06:09.399 }, 00:06:09.399 "method": "bdev_uring_create" 00:06:09.399 }, 00:06:09.399 { 00:06:09.399 "params": { 00:06:09.399 "name": "uring0" 00:06:09.399 }, 00:06:09.399 "method": "bdev_uring_delete" 00:06:09.399 }, 00:06:09.399 { 00:06:09.399 "method": "bdev_wait_for_examine" 00:06:09.399 } 00:06:09.399 ] 00:06:09.399 } 00:06:09.399 ] 00:06:09.399 } 00:06:09.399 [2024-11-26 19:40:04.545419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.400 [2024-11-26 19:40:04.584071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.400 [2024-11-26 19:40:04.617106] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:09.657  [2024-11-26T19:40:05.162Z] Copying: 0/0 [B] (average 0 Bps) 00:06:09.915 00:06:09.915 19:40:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:06:09.915 19:40:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # local es=0 00:06:09.915 19:40:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:06:09.915 19:40:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:06:09.915 19:40:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:09.915 19:40:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:06:09.915 19:40:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:09.915 19:40:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:09.915 19:40:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:09.915 19:40:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:09.915 19:40:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:09.915 19:40:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:09.915 19:40:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:09.915 19:40:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:09.915 19:40:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:09.915 19:40:04 
spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:06:09.915 [2024-11-26 19:40:04.999499] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:06:09.915 [2024-11-26 19:40:04.999566] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60389 ] 00:06:09.915 { 00:06:09.915 "subsystems": [ 00:06:09.915 { 00:06:09.915 "subsystem": "bdev", 00:06:09.915 "config": [ 00:06:09.915 { 00:06:09.915 "params": { 00:06:09.915 "block_size": 512, 00:06:09.915 "num_blocks": 1048576, 00:06:09.915 "name": "malloc0" 00:06:09.915 }, 00:06:09.915 "method": "bdev_malloc_create" 00:06:09.915 }, 00:06:09.915 { 00:06:09.915 "params": { 00:06:09.915 "filename": "/dev/zram1", 00:06:09.915 "name": "uring0" 00:06:09.915 }, 00:06:09.915 "method": "bdev_uring_create" 00:06:09.915 }, 00:06:09.915 { 00:06:09.915 "params": { 00:06:09.915 "name": "uring0" 00:06:09.915 }, 00:06:09.915 "method": "bdev_uring_delete" 00:06:09.915 }, 00:06:09.915 { 00:06:09.915 "method": "bdev_wait_for_examine" 00:06:09.915 } 00:06:09.915 ] 00:06:09.915 } 00:06:09.915 ] 00:06:09.915 } 00:06:09.916 [2024-11-26 19:40:05.139821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.174 [2024-11-26 19:40:05.176857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.174 [2024-11-26 19:40:05.209474] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:10.174 [2024-11-26 19:40:05.351508] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:06:10.174 [2024-11-26 19:40:05.351551] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:06:10.174 [2024-11-26 19:40:05.351558] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:06:10.174 [2024-11-26 19:40:05.351564] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:10.432 [2024-11-26 19:40:05.510717] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:10.432 19:40:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # es=237 00:06:10.432 19:40:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:10.432 19:40:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@664 -- # es=109 00:06:10.432 19:40:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@665 -- # case "$es" in 00:06:10.432 19:40:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@672 -- # es=1 00:06:10.432 19:40:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:10.432 19:40:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:06:10.432 19:40:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:06:10.432 19:40:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:06:10.432 19:40:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:06:10.432 19:40:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:06:10.432 19:40:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:10.690 00:06:10.690 real 0m11.058s 00:06:10.690 user 0m7.693s 00:06:10.690 sys 0m9.411s 00:06:10.690 19:40:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.690 19:40:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:10.690 ************************************ 00:06:10.690 END TEST dd_uring_copy 00:06:10.690 ************************************ 00:06:10.690 00:06:10.690 real 0m11.247s 00:06:10.690 user 0m7.799s 00:06:10.690 sys 0m9.504s 00:06:10.690 19:40:05 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.690 19:40:05 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:06:10.690 ************************************ 00:06:10.690 END TEST spdk_dd_uring 00:06:10.690 ************************************ 00:06:10.690 19:40:05 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:06:10.690 19:40:05 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.690 19:40:05 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.690 19:40:05 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:10.690 ************************************ 00:06:10.690 START TEST spdk_dd_sparse 00:06:10.690 ************************************ 00:06:10.690 19:40:05 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:06:10.690 * Looking for test storage... 00:06:10.948 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:10.948 19:40:05 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:10.948 19:40:05 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # lcov --version 00:06:10.948 19:40:05 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:10.948 19:40:06 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:10.948 19:40:06 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.948 19:40:06 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.948 19:40:06 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.948 19:40:06 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.948 19:40:06 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.948 19:40:06 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.948 19:40:06 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.948 19:40:06 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.948 19:40:06 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.948 19:40:06 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.948 19:40:06 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.948 19:40:06 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:06:10.948 19:40:06 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:06:10.948 19:40:06 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.948 19:40:06 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:10.948 19:40:06 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:06:10.949 19:40:06 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:06:10.949 19:40:06 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.949 19:40:06 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:06:10.949 19:40:06 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.949 19:40:06 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:06:10.949 19:40:06 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:06:10.949 19:40:06 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.949 19:40:06 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:06:10.949 19:40:06 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.949 19:40:06 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.949 19:40:06 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.949 19:40:06 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:06:10.949 19:40:06 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.949 19:40:06 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:10.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.949 --rc genhtml_branch_coverage=1 00:06:10.949 --rc genhtml_function_coverage=1 00:06:10.949 --rc genhtml_legend=1 00:06:10.949 --rc geninfo_all_blocks=1 00:06:10.949 --rc geninfo_unexecuted_blocks=1 00:06:10.949 00:06:10.949 ' 00:06:10.949 19:40:06 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:10.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.949 --rc genhtml_branch_coverage=1 00:06:10.949 --rc genhtml_function_coverage=1 00:06:10.949 --rc genhtml_legend=1 00:06:10.949 --rc geninfo_all_blocks=1 00:06:10.949 --rc geninfo_unexecuted_blocks=1 00:06:10.949 00:06:10.949 ' 00:06:10.949 19:40:06 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:10.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.949 --rc genhtml_branch_coverage=1 00:06:10.949 --rc genhtml_function_coverage=1 00:06:10.949 --rc genhtml_legend=1 00:06:10.949 --rc geninfo_all_blocks=1 00:06:10.949 --rc geninfo_unexecuted_blocks=1 00:06:10.949 00:06:10.949 ' 00:06:10.949 19:40:06 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:10.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.949 --rc genhtml_branch_coverage=1 00:06:10.949 --rc genhtml_function_coverage=1 00:06:10.949 --rc genhtml_legend=1 00:06:10.949 --rc geninfo_all_blocks=1 00:06:10.949 --rc geninfo_unexecuted_blocks=1 00:06:10.949 00:06:10.949 ' 00:06:10.949 19:40:06 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:10.949 19:40:06 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:06:10.949 19:40:06 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:10.949 19:40:06 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:10.949 19:40:06 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:10.949 19:40:06 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.949 19:40:06 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.949 19:40:06 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.949 19:40:06 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:06:10.949 19:40:06 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.949 19:40:06 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:06:10.949 19:40:06 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:06:10.949 19:40:06 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:06:10.949 19:40:06 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:06:10.949 19:40:06 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:06:10.949 19:40:06 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:06:10.949 19:40:06 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:06:10.949 19:40:06 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:06:10.949 19:40:06 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:06:10.949 19:40:06 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:06:10.949 19:40:06 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:06:10.949 1+0 records in 00:06:10.949 1+0 records out 00:06:10.949 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.00481738 s, 871 MB/s 00:06:10.949 19:40:06 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:06:10.949 1+0 records in 00:06:10.949 1+0 records out 00:06:10.949 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00500041 s, 839 MB/s 00:06:10.949 19:40:06 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:06:10.949 1+0 records in 00:06:10.949 1+0 records out 00:06:10.949 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00342977 s, 1.2 GB/s 00:06:10.949 19:40:06 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:06:10.949 19:40:06 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.949 19:40:06 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.949 19:40:06 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:06:10.949 ************************************ 00:06:10.949 START TEST dd_sparse_file_to_file 00:06:10.949 ************************************ 00:06:10.949 19:40:06 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1129 -- # file_to_file 00:06:10.949 19:40:06 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:06:10.949 19:40:06 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:06:10.949 19:40:06 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:06:10.949 19:40:06 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:06:10.949 19:40:06 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:06:10.949 19:40:06 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:06:10.949 19:40:06 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:06:10.949 19:40:06 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:06:10.949 19:40:06 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:06:10.949 19:40:06 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:06:10.949 [2024-11-26 19:40:06.088658] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:06:10.949 [2024-11-26 19:40:06.088724] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60484 ] 00:06:10.949 { 00:06:10.949 "subsystems": [ 00:06:10.949 { 00:06:10.949 "subsystem": "bdev", 00:06:10.949 "config": [ 00:06:10.949 { 00:06:10.949 "params": { 00:06:10.949 "block_size": 4096, 00:06:10.949 "filename": "dd_sparse_aio_disk", 00:06:10.949 "name": "dd_aio" 00:06:10.949 }, 00:06:10.949 "method": "bdev_aio_create" 00:06:10.949 }, 00:06:10.949 { 00:06:10.949 "params": { 00:06:10.949 "lvs_name": "dd_lvstore", 00:06:10.949 "bdev_name": "dd_aio" 00:06:10.949 }, 00:06:10.949 "method": "bdev_lvol_create_lvstore" 00:06:10.949 }, 00:06:10.949 { 00:06:10.949 "method": "bdev_wait_for_examine" 00:06:10.949 } 00:06:10.949 ] 00:06:10.949 } 00:06:10.949 ] 00:06:10.949 } 00:06:11.209 [2024-11-26 19:40:06.223754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.209 [2024-11-26 19:40:06.256149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.209 [2024-11-26 19:40:06.285200] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:11.209  [2024-11-26T19:40:06.726Z] Copying: 12/36 [MB] (average 1714 MBps) 00:06:11.479 00:06:11.479 19:40:06 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:06:11.479 19:40:06 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:06:11.479 19:40:06 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:06:11.479 19:40:06 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:06:11.479 19:40:06 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:06:11.479 19:40:06 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:06:11.479 19:40:06 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:06:11.479 19:40:06 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:06:11.479 19:40:06 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:06:11.479 19:40:06 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:06:11.479 00:06:11.479 real 0m0.456s 00:06:11.479 user 0m0.255s 00:06:11.479 sys 0m0.208s 00:06:11.479 19:40:06 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:11.479 19:40:06 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:06:11.479 ************************************ 00:06:11.479 END TEST dd_sparse_file_to_file 00:06:11.479 ************************************ 00:06:11.479 19:40:06 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:06:11.479 19:40:06 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:11.479 19:40:06 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.479 19:40:06 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:06:11.479 ************************************ 00:06:11.479 START TEST dd_sparse_file_to_bdev 
00:06:11.479 ************************************ 00:06:11.479 19:40:06 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1129 -- # file_to_bdev 00:06:11.479 19:40:06 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:06:11.479 19:40:06 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:06:11.479 19:40:06 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:06:11.479 19:40:06 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:06:11.479 19:40:06 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:06:11.479 19:40:06 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:06:11.479 19:40:06 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:11.479 19:40:06 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:11.479 [2024-11-26 19:40:06.575082] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:06:11.479 [2024-11-26 19:40:06.575146] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60521 ] 00:06:11.479 { 00:06:11.479 "subsystems": [ 00:06:11.479 { 00:06:11.479 "subsystem": "bdev", 00:06:11.479 "config": [ 00:06:11.479 { 00:06:11.479 "params": { 00:06:11.479 "block_size": 4096, 00:06:11.479 "filename": "dd_sparse_aio_disk", 00:06:11.479 "name": "dd_aio" 00:06:11.479 }, 00:06:11.479 "method": "bdev_aio_create" 00:06:11.479 }, 00:06:11.479 { 00:06:11.479 "params": { 00:06:11.479 "lvs_name": "dd_lvstore", 00:06:11.479 "lvol_name": "dd_lvol", 00:06:11.479 "size_in_mib": 36, 00:06:11.479 "thin_provision": true 00:06:11.479 }, 00:06:11.479 "method": "bdev_lvol_create" 00:06:11.479 }, 00:06:11.479 { 00:06:11.479 "method": "bdev_wait_for_examine" 00:06:11.479 } 00:06:11.479 ] 00:06:11.479 } 00:06:11.479 ] 00:06:11.479 } 00:06:11.479 [2024-11-26 19:40:06.711747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.769 [2024-11-26 19:40:06.744534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.769 [2024-11-26 19:40:06.774661] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:11.769  [2024-11-26T19:40:07.016Z] Copying: 12/36 [MB] (average 571 MBps) 00:06:11.769 00:06:11.769 00:06:11.769 real 0m0.422s 00:06:11.769 user 0m0.249s 00:06:11.769 sys 0m0.199s 00:06:11.769 19:40:06 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:11.769 ************************************ 00:06:11.769 19:40:06 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:11.769 END TEST dd_sparse_file_to_bdev 00:06:11.769 ************************************ 00:06:11.769 19:40:06 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file 
bdev_to_file 00:06:11.769 19:40:06 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:11.769 19:40:06 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.769 19:40:06 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:06:11.769 ************************************ 00:06:11.769 START TEST dd_sparse_bdev_to_file 00:06:11.769 ************************************ 00:06:11.769 19:40:07 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1129 -- # bdev_to_file 00:06:11.769 19:40:07 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:06:11.769 19:40:07 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:06:11.769 19:40:07 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:06:11.769 19:40:07 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:06:11.769 19:40:07 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:06:11.769 19:40:07 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:06:11.769 19:40:07 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:06:11.769 19:40:07 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:06:12.027 [2024-11-26 19:40:07.039301] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:06:12.027 [2024-11-26 19:40:07.039362] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60559 ] 00:06:12.027 { 00:06:12.027 "subsystems": [ 00:06:12.027 { 00:06:12.027 "subsystem": "bdev", 00:06:12.027 "config": [ 00:06:12.027 { 00:06:12.027 "params": { 00:06:12.027 "block_size": 4096, 00:06:12.027 "filename": "dd_sparse_aio_disk", 00:06:12.027 "name": "dd_aio" 00:06:12.027 }, 00:06:12.027 "method": "bdev_aio_create" 00:06:12.027 }, 00:06:12.027 { 00:06:12.027 "method": "bdev_wait_for_examine" 00:06:12.027 } 00:06:12.027 ] 00:06:12.027 } 00:06:12.027 ] 00:06:12.027 } 00:06:12.027 [2024-11-26 19:40:07.180131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.027 [2024-11-26 19:40:07.217723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.027 [2024-11-26 19:40:07.249244] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:12.284  [2024-11-26T19:40:07.531Z] Copying: 12/36 [MB] (average 800 MBps) 00:06:12.284 00:06:12.284 19:40:07 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:06:12.284 19:40:07 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:06:12.284 19:40:07 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:06:12.284 19:40:07 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:06:12.284 19:40:07 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 
37748736 == \3\7\7\4\8\7\3\6 ]] 00:06:12.284 19:40:07 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:06:12.284 19:40:07 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:06:12.284 19:40:07 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:06:12.284 19:40:07 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:06:12.284 19:40:07 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:06:12.284 00:06:12.284 real 0m0.467s 00:06:12.284 user 0m0.261s 00:06:12.284 sys 0m0.233s 00:06:12.284 19:40:07 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.284 ************************************ 00:06:12.284 END TEST dd_sparse_bdev_to_file 00:06:12.284 ************************************ 00:06:12.284 19:40:07 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:06:12.284 19:40:07 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:06:12.284 19:40:07 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:06:12.284 19:40:07 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:06:12.284 19:40:07 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:06:12.284 19:40:07 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:06:12.284 00:06:12.284 real 0m1.645s 00:06:12.284 user 0m0.892s 00:06:12.284 sys 0m0.813s 00:06:12.284 19:40:07 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.284 19:40:07 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:06:12.284 ************************************ 00:06:12.284 END TEST spdk_dd_sparse 00:06:12.284 ************************************ 00:06:12.542 19:40:07 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:06:12.542 19:40:07 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.542 19:40:07 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.542 19:40:07 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:12.542 ************************************ 00:06:12.542 START TEST spdk_dd_negative 00:06:12.542 ************************************ 00:06:12.542 19:40:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:06:12.542 * Looking for test storage... 
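Taken together, the three sub-tests above exercise a full sparse round trip through the same dd_aio/dd_lvstore stack. The invocations below are copied from the trace, with the binary path shortened to spdk_dd and conf.json standing in for the per-test bdev config that the trace streams over /dev/fd/62:

  spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json conf.json            # file -> file
  spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json conf.json    # file -> thin-provisioned lvol
  spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json conf.json    # lvol -> file

After each hop the test compares both stat --printf=%s (apparent size, 37748736) and stat --printf=%b (allocated blocks, 24576) against the original, so a copy that silently filled the holes would fail even though the byte contents match.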
00:06:12.542 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:12.542 19:40:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:12.542 19:40:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # lcov --version 00:06:12.542 19:40:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:12.542 19:40:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:12.542 19:40:07 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:12.542 19:40:07 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:12.542 19:40:07 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:12.542 19:40:07 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:06:12.542 19:40:07 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:06:12.542 19:40:07 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:06:12.542 19:40:07 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:06:12.542 19:40:07 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:06:12.542 19:40:07 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:06:12.542 19:40:07 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:06:12.542 19:40:07 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:12.542 19:40:07 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:06:12.542 19:40:07 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:06:12.542 19:40:07 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:12.542 19:40:07 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:12.542 19:40:07 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:06:12.542 19:40:07 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:06:12.542 19:40:07 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:12.542 19:40:07 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:06:12.542 19:40:07 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:06:12.542 19:40:07 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:06:12.543 19:40:07 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:06:12.543 19:40:07 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:12.543 19:40:07 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:06:12.543 19:40:07 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:06:12.543 19:40:07 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:12.543 19:40:07 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:12.543 19:40:07 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:06:12.543 19:40:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:12.543 19:40:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:12.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.543 --rc genhtml_branch_coverage=1 00:06:12.543 --rc genhtml_function_coverage=1 00:06:12.543 --rc genhtml_legend=1 00:06:12.543 --rc geninfo_all_blocks=1 00:06:12.543 --rc geninfo_unexecuted_blocks=1 00:06:12.543 00:06:12.543 ' 00:06:12.543 19:40:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:12.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.543 --rc genhtml_branch_coverage=1 00:06:12.543 --rc genhtml_function_coverage=1 00:06:12.543 --rc genhtml_legend=1 00:06:12.543 --rc geninfo_all_blocks=1 00:06:12.543 --rc geninfo_unexecuted_blocks=1 00:06:12.543 00:06:12.543 ' 00:06:12.543 19:40:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:12.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.543 --rc genhtml_branch_coverage=1 00:06:12.543 --rc genhtml_function_coverage=1 00:06:12.543 --rc genhtml_legend=1 00:06:12.543 --rc geninfo_all_blocks=1 00:06:12.543 --rc geninfo_unexecuted_blocks=1 00:06:12.543 00:06:12.543 ' 00:06:12.543 19:40:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:12.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.543 --rc genhtml_branch_coverage=1 00:06:12.543 --rc genhtml_function_coverage=1 00:06:12.543 --rc genhtml_legend=1 00:06:12.543 --rc geninfo_all_blocks=1 00:06:12.543 --rc geninfo_unexecuted_blocks=1 00:06:12.543 00:06:12.543 ' 00:06:12.543 19:40:07 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:12.543 19:40:07 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:06:12.543 19:40:07 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:12.543 19:40:07 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:12.543 19:40:07 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:06:12.543 19:40:07 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.543 19:40:07 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.543 19:40:07 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.543 19:40:07 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:06:12.543 19:40:07 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.543 19:40:07 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:12.543 19:40:07 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:12.543 19:40:07 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:12.543 19:40:07 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:12.543 19:40:07 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:06:12.543 19:40:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.543 19:40:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.543 19:40:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:12.543 ************************************ 00:06:12.543 START TEST 
dd_invalid_arguments 00:06:12.543 ************************************ 00:06:12.543 19:40:07 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1129 -- # invalid_arguments 00:06:12.543 19:40:07 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:06:12.543 19:40:07 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # local es=0 00:06:12.543 19:40:07 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:06:12.543 19:40:07 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:12.543 19:40:07 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:12.543 19:40:07 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:12.543 19:40:07 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:12.543 19:40:07 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:12.543 19:40:07 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:12.543 19:40:07 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:12.543 19:40:07 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:12.543 19:40:07 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:06:12.543 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:06:12.543 00:06:12.543 CPU options: 00:06:12.543 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:06:12.543 (like [0,1,10]) 00:06:12.543 --lcores lcore to CPU mapping list. The list is in the format: 00:06:12.543 [<,lcores[@CPUs]>...] 00:06:12.543 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:06:12.543 Within the group, '-' is used for range separator, 00:06:12.543 ',' is used for single number separator. 00:06:12.543 '( )' can be omitted for single element group, 00:06:12.543 '@' can be omitted if cpus and lcores have the same value 00:06:12.543 --disable-cpumask-locks Disable CPU core lock files. 00:06:12.543 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:06:12.543 pollers in the app support interrupt mode) 00:06:12.543 -p, --main-core main (primary) core for DPDK 00:06:12.543 00:06:12.543 Configuration options: 00:06:12.543 -c, --config, --json JSON config file 00:06:12.543 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:06:12.543 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:06:12.543 --wait-for-rpc wait for RPCs to initialize subsystems 00:06:12.543 --rpcs-allowed comma-separated list of permitted RPCS 00:06:12.543 --json-ignore-init-errors don't exit on invalid config entry 00:06:12.543 00:06:12.543 Memory options: 00:06:12.543 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:06:12.543 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:06:12.543 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:06:12.543 -R, --huge-unlink unlink huge files after initialization 00:06:12.543 -n, --mem-channels number of memory channels used for DPDK 00:06:12.543 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:06:12.543 --msg-mempool-size global message memory pool size in count (default: 262143) 00:06:12.543 --no-huge run without using hugepages 00:06:12.543 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:06:12.543 -i, --shm-id shared memory ID (optional) 00:06:12.543 -g, --single-file-segments force creating just one hugetlbfs file 00:06:12.543 00:06:12.543 PCI options: 00:06:12.543 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:06:12.543 -B, --pci-blocked pci addr to block (can be used more than once) 00:06:12.543 -u, --no-pci disable PCI access 00:06:12.543 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:06:12.543 00:06:12.543 Log options: 00:06:12.543 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:06:12.543 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:06:12.543 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:06:12.543 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:06:12.543 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:06:12.543 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:06:12.543 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:06:12.544 sock_posix, spdk_aio_mgr_io, thread, trace, uring, vbdev_delay, 00:06:12.544 vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, 00:06:12.544 vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, 00:06:12.544 virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:06:12.544 --silence-noticelog disable notice level logging to stderr 00:06:12.544 00:06:12.544 Trace options: 00:06:12.544 --num-trace-entries number of trace entries for each core, must be power of 2, 00:06:12.544 setting 0 to disable trace (default 32768) 00:06:12.544 Tracepoints vary in size and can use more than one trace entry. 00:06:12.544 -e, --tpoint-group [:] 00:06:12.544 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:06:12.544 [2024-11-26 19:40:07.754817] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:06:12.544 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:06:12.544 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, blob, 00:06:12.544 bdev_raid, scheduler, all). 00:06:12.544 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:06:12.544 a tracepoint group. First tpoint inside a group can be enabled by 00:06:12.544 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:06:12.544 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:06:12.544 in /include/spdk_internal/trace_defs.h 00:06:12.544 00:06:12.544 Other options: 00:06:12.544 -h, --help show this usage 00:06:12.544 -v, --version print SPDK version 00:06:12.544 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:06:12.544 --env-context Opaque context for use of the env implementation 00:06:12.544 00:06:12.544 Application specific: 00:06:12.544 [--------- DD Options ---------] 00:06:12.544 --if Input file. Must specify either --if or --ib. 00:06:12.544 --ib Input bdev. Must specifier either --if or --ib 00:06:12.544 --of Output file. Must specify either --of or --ob. 00:06:12.544 --ob Output bdev. Must specify either --of or --ob. 00:06:12.544 --iflag Input file flags. 00:06:12.544 --oflag Output file flags. 00:06:12.544 --bs I/O unit size (default: 4096) 00:06:12.544 --qd Queue depth (default: 2) 00:06:12.544 --count I/O unit count. The number of I/O units to copy. (default: all) 00:06:12.544 --skip Skip this many I/O units at start of input. (default: 0) 00:06:12.544 --seek Skip this many I/O units at start of output. (default: 0) 00:06:12.544 --aio Force usage of AIO. (by default io_uring is used if available) 00:06:12.544 --sparse Enable hole skipping in input target 00:06:12.544 Available iflag and oflag values: 00:06:12.544 append - append mode 00:06:12.544 direct - use direct I/O for data 00:06:12.544 directory - fail unless a directory 00:06:12.544 dsync - use synchronized I/O for data 00:06:12.544 noatime - do not update access time 00:06:12.544 noctty - do not assign controlling terminal from file 00:06:12.544 nofollow - do not follow symlinks 00:06:12.544 nonblock - use non-blocking I/O 00:06:12.544 sync - use synchronized I/O for data and metadata 00:06:12.544 19:40:07 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # es=2 00:06:12.544 19:40:07 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:12.544 19:40:07 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:12.544 19:40:07 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:12.544 00:06:12.544 real 0m0.051s 00:06:12.544 user 0m0.034s 00:06:12.544 sys 0m0.015s 00:06:12.544 19:40:07 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.544 19:40:07 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:06:12.544 ************************************ 00:06:12.544 END TEST dd_invalid_arguments 00:06:12.544 ************************************ 00:06:12.802 19:40:07 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:06:12.802 19:40:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.802 19:40:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.802 19:40:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:12.802 ************************************ 00:06:12.802 START TEST dd_double_input 00:06:12.802 ************************************ 00:06:12.802 19:40:07 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1129 -- # double_input 00:06:12.802 19:40:07 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:06:12.802 19:40:07 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # local es=0 00:06:12.802 19:40:07 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:06:12.802 19:40:07 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:12.802 19:40:07 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:12.802 19:40:07 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:12.802 19:40:07 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:12.802 19:40:07 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:12.802 19:40:07 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:12.802 19:40:07 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:12.802 19:40:07 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:12.802 19:40:07 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:06:12.802 [2024-11-26 19:40:07.843365] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
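The negative tests in this section all follow the same pattern: feed spdk_dd one deliberately invalid flag combination and require a non-zero exit status (the NOT helper inverts the check). A condensed list of the invalid invocations exercised here, copied from the trace with dd.dump0/dd.dump1 abbreviating the full /home/vagrant/spdk_repo/spdk/test/dd paths:

  spdk_dd --ii= --ob=                          # unrecognized option -> "Invalid arguments"
  spdk_dd --if=dd.dump0 --ib= --ob=            # --if and --ib together are rejected
  spdk_dd --if=dd.dump0 --of=dd.dump1 --ob=    # --of and --ob together are rejected
  spdk_dd --ob=                                # no input: must specify either --if or --ib
  spdk_dd --if=dd.dump0                        # no output: must specify either --of or --ob
  spdk_dd --if=dd.dump0 --of=dd.dump1 --bs=0   # invalid --bs value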
00:06:12.802 19:40:07 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # es=22 00:06:12.802 19:40:07 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:12.802 19:40:07 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:12.802 19:40:07 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:12.802 00:06:12.802 real 0m0.051s 00:06:12.802 user 0m0.031s 00:06:12.802 sys 0m0.019s 00:06:12.802 19:40:07 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.802 19:40:07 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:06:12.802 ************************************ 00:06:12.802 END TEST dd_double_input 00:06:12.802 ************************************ 00:06:12.802 19:40:07 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:06:12.802 19:40:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.802 19:40:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.802 19:40:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:12.802 ************************************ 00:06:12.802 START TEST dd_double_output 00:06:12.802 ************************************ 00:06:12.802 19:40:07 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1129 -- # double_output 00:06:12.802 19:40:07 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:06:12.802 19:40:07 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # local es=0 00:06:12.802 19:40:07 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:06:12.802 19:40:07 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:12.802 19:40:07 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:12.802 19:40:07 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:12.802 19:40:07 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:12.802 19:40:07 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:12.802 19:40:07 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:12.802 19:40:07 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:12.802 19:40:07 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:12.802 19:40:07 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:06:12.802 [2024-11-26 19:40:07.927529] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:06:12.802 19:40:07 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # es=22 00:06:12.802 19:40:07 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:12.802 19:40:07 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:12.802 19:40:07 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:12.802 00:06:12.802 real 0m0.045s 00:06:12.802 user 0m0.027s 00:06:12.802 sys 0m0.017s 00:06:12.802 19:40:07 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.802 19:40:07 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:06:12.802 ************************************ 00:06:12.802 END TEST dd_double_output 00:06:12.802 ************************************ 00:06:12.802 19:40:07 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:06:12.802 19:40:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.802 19:40:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.802 19:40:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:12.802 ************************************ 00:06:12.802 START TEST dd_no_input 00:06:12.802 ************************************ 00:06:12.802 19:40:07 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1129 -- # no_input 00:06:12.802 19:40:07 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:06:12.802 19:40:07 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # local es=0 00:06:12.802 19:40:07 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:06:12.802 19:40:07 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:12.802 19:40:07 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:12.802 19:40:07 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:12.802 19:40:07 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:12.802 19:40:07 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:12.802 19:40:07 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:12.802 19:40:07 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:12.802 19:40:07 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:12.802 19:40:07 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:06:12.802 [2024-11-26 19:40:08.010350] spdk_dd.c:1499:main: 
*ERROR*: You must specify either --if or --ib 00:06:12.802 19:40:08 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # es=22 00:06:12.802 19:40:08 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:12.802 19:40:08 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:12.802 19:40:08 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:12.802 00:06:12.802 real 0m0.047s 00:06:12.802 user 0m0.026s 00:06:12.802 sys 0m0.020s 00:06:12.802 19:40:08 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.802 19:40:08 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:06:12.802 ************************************ 00:06:12.802 END TEST dd_no_input 00:06:12.802 ************************************ 00:06:13.061 19:40:08 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:06:13.061 19:40:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:13.061 19:40:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.061 19:40:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:13.061 ************************************ 00:06:13.061 START TEST dd_no_output 00:06:13.061 ************************************ 00:06:13.061 19:40:08 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1129 -- # no_output 00:06:13.061 19:40:08 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:13.061 19:40:08 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # local es=0 00:06:13.061 19:40:08 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:13.061 19:40:08 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:13.061 19:40:08 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.061 19:40:08 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:13.061 19:40:08 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.061 19:40:08 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:13.061 19:40:08 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.061 19:40:08 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:13.061 19:40:08 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:13.061 19:40:08 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:13.061 [2024-11-26 19:40:08.100214] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:06:13.061 19:40:08 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # es=22 00:06:13.061 19:40:08 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:13.061 19:40:08 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:13.061 19:40:08 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:13.061 00:06:13.061 real 0m0.048s 00:06:13.061 user 0m0.030s 00:06:13.061 sys 0m0.017s 00:06:13.061 19:40:08 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.061 19:40:08 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:06:13.061 ************************************ 00:06:13.061 END TEST dd_no_output 00:06:13.061 ************************************ 00:06:13.061 19:40:08 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:06:13.061 19:40:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:13.061 19:40:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.061 19:40:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:13.061 ************************************ 00:06:13.061 START TEST dd_wrong_blocksize 00:06:13.061 ************************************ 00:06:13.061 19:40:08 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1129 -- # wrong_blocksize 00:06:13.061 19:40:08 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:06:13.061 19:40:08 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:06:13.061 19:40:08 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:06:13.061 19:40:08 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:13.061 19:40:08 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.061 19:40:08 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:13.061 19:40:08 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.061 19:40:08 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:13.061 19:40:08 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.061 19:40:08 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:13.061 19:40:08 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:13.061 19:40:08 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:06:13.061 [2024-11-26 19:40:08.184248] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:06:13.061 19:40:08 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # es=22 00:06:13.061 19:40:08 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:13.061 19:40:08 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:13.061 19:40:08 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:13.061 00:06:13.061 real 0m0.047s 00:06:13.061 user 0m0.033s 00:06:13.061 sys 0m0.013s 00:06:13.061 19:40:08 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.061 19:40:08 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:06:13.061 ************************************ 00:06:13.061 END TEST dd_wrong_blocksize 00:06:13.061 ************************************ 00:06:13.061 19:40:08 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:06:13.061 19:40:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:13.061 19:40:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.061 19:40:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:13.061 ************************************ 00:06:13.061 START TEST dd_smaller_blocksize 00:06:13.061 ************************************ 00:06:13.061 19:40:08 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1129 -- # smaller_blocksize 00:06:13.061 19:40:08 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:06:13.061 19:40:08 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:06:13.061 19:40:08 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:06:13.061 19:40:08 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:13.061 19:40:08 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.061 19:40:08 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:13.061 19:40:08 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.061 19:40:08 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:13.061 19:40:08 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.061 19:40:08 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:13.061 
19:40:08 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:13.061 19:40:08 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:06:13.061 [2024-11-26 19:40:08.270207] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:06:13.061 [2024-11-26 19:40:08.270266] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60780 ] 00:06:13.319 [2024-11-26 19:40:08.408394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.319 [2024-11-26 19:40:08.444508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.319 [2024-11-26 19:40:08.475891] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:13.575 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:06:13.834 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:06:13.834 [2024-11-26 19:40:08.901885] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:06:13.834 [2024-11-26 19:40:08.901941] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:13.834 [2024-11-26 19:40:08.962723] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:13.834 19:40:09 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # es=244 00:06:13.834 19:40:09 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:13.834 19:40:09 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@664 -- # es=116 00:06:13.834 19:40:09 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@665 -- # case "$es" in 00:06:13.834 19:40:09 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@672 -- # es=1 00:06:13.834 19:40:09 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:13.834 00:06:13.834 real 0m0.774s 00:06:13.834 user 0m0.235s 00:06:13.834 sys 0m0.433s 00:06:13.834 19:40:09 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.834 19:40:09 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:06:13.834 ************************************ 00:06:13.834 END TEST dd_smaller_blocksize 00:06:13.834 ************************************ 00:06:13.834 19:40:09 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:06:13.834 19:40:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:13.834 19:40:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.834 19:40:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:13.834 ************************************ 00:06:13.834 START TEST dd_invalid_count 00:06:13.834 ************************************ 00:06:13.834 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1129 -- # invalid_count 
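Unlike the flag-validation cases above, dd_smaller_blocksize actually starts the app and fails during the copy itself. Illustrative only, with the binary path shortened and the expected outcome taken from the trace:

  spdk_dd --if=dd.dump0 --of=dd.dump1 --bs=99999999999999
  # expected: EAL cannot find a suitable memseg list, spdk_dd prints
  # "Cannot allocate memory - try smaller block size value", and the exit status is non-zero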
00:06:13.834 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:06:13.834 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # local es=0 00:06:13.834 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:06:13.834 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:13.834 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.834 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:13.834 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.834 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:13.834 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.834 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:13.834 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:13.834 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:06:14.093 [2024-11-26 19:40:09.086610] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:06:14.093 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # es=22 00:06:14.093 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:14.093 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:14.093 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:14.093 00:06:14.093 real 0m0.049s 00:06:14.093 user 0m0.036s 00:06:14.093 sys 0m0.013s 00:06:14.093 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.093 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:06:14.093 ************************************ 00:06:14.093 END TEST dd_invalid_count 00:06:14.093 ************************************ 00:06:14.093 19:40:09 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:06:14.093 19:40:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:14.093 19:40:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.093 19:40:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:14.093 ************************************ 
00:06:14.093 START TEST dd_invalid_oflag 00:06:14.093 ************************************ 00:06:14.093 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1129 -- # invalid_oflag 00:06:14.093 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:06:14.093 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # local es=0 00:06:14.093 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:06:14.093 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:14.093 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.093 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:14.093 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.093 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:14.093 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.093 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:14.093 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:14.093 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:06:14.093 [2024-11-26 19:40:09.174052] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:06:14.093 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # es=22 00:06:14.093 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:14.093 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:14.093 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:14.093 00:06:14.093 real 0m0.047s 00:06:14.093 user 0m0.029s 00:06:14.093 sys 0m0.018s 00:06:14.093 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.093 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:06:14.093 ************************************ 00:06:14.093 END TEST dd_invalid_oflag 00:06:14.093 ************************************ 00:06:14.093 19:40:09 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:06:14.094 19:40:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:14.094 19:40:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.094 19:40:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:14.094 ************************************ 00:06:14.094 START TEST dd_invalid_iflag 00:06:14.094 
************************************ 00:06:14.094 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1129 -- # invalid_iflag 00:06:14.094 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:06:14.094 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # local es=0 00:06:14.094 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:06:14.094 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:14.094 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.094 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:14.094 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.094 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:14.094 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.094 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:14.094 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:14.094 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:06:14.094 [2024-11-26 19:40:09.259951] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:06:14.094 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # es=22 00:06:14.094 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:14.094 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:14.094 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:14.094 00:06:14.094 real 0m0.044s 00:06:14.094 user 0m0.028s 00:06:14.094 sys 0m0.016s 00:06:14.094 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.094 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:06:14.094 ************************************ 00:06:14.094 END TEST dd_invalid_iflag 00:06:14.094 ************************************ 00:06:14.094 19:40:09 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:06:14.094 19:40:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:14.094 19:40:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.094 19:40:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:14.094 ************************************ 00:06:14.094 START TEST dd_unknown_flag 00:06:14.094 ************************************ 00:06:14.094 
19:40:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1129 -- # unknown_flag 00:06:14.094 19:40:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:06:14.094 19:40:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # local es=0 00:06:14.094 19:40:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:06:14.094 19:40:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:14.094 19:40:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.094 19:40:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:14.094 19:40:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.094 19:40:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:14.094 19:40:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.094 19:40:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:14.094 19:40:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:14.094 19:40:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:06:14.352 [2024-11-26 19:40:09.345527] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:06:14.352 [2024-11-26 19:40:09.345585] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60872 ] 00:06:14.352 [2024-11-26 19:40:09.478853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.352 [2024-11-26 19:40:09.511441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.352 [2024-11-26 19:40:09.540778] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:14.352 [2024-11-26 19:40:09.564627] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:06:14.352 [2024-11-26 19:40:09.564667] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:14.352 [2024-11-26 19:40:09.564699] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:06:14.352 [2024-11-26 19:40:09.564705] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:14.352 [2024-11-26 19:40:09.564854] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:06:14.352 [2024-11-26 19:40:09.564861] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:14.352 [2024-11-26 19:40:09.564892] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:06:14.352 [2024-11-26 19:40:09.564896] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:06:14.609 [2024-11-26 19:40:09.621292] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:14.609 19:40:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # es=234 00:06:14.609 19:40:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:14.609 19:40:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@664 -- # es=106 00:06:14.609 19:40:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@665 -- # case "$es" in 00:06:14.609 19:40:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@672 -- # es=1 00:06:14.609 19:40:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:14.609 00:06:14.609 real 0m0.351s 00:06:14.609 user 0m0.170s 00:06:14.609 sys 0m0.088s 00:06:14.609 19:40:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.609 19:40:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:06:14.609 ************************************ 00:06:14.609 END TEST dd_unknown_flag 00:06:14.609 ************************************ 00:06:14.609 19:40:09 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:06:14.609 19:40:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:14.609 19:40:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.609 19:40:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:14.609 ************************************ 00:06:14.609 START TEST dd_invalid_json 00:06:14.609 ************************************ 00:06:14.609 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1129 -- # invalid_json 00:06:14.609 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:06:14.609 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # local es=0 00:06:14.609 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:06:14.609 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:14.609 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:06:14.609 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.609 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:14.609 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.609 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:14.609 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.609 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:14.609 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:14.610 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:06:14.610 [2024-11-26 19:40:09.732304] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:06:14.610 [2024-11-26 19:40:09.732365] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60901 ] 00:06:14.867 [2024-11-26 19:40:09.867742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.867 [2024-11-26 19:40:09.900243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.867 [2024-11-26 19:40:09.900287] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:06:14.867 [2024-11-26 19:40:09.900296] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:14.867 [2024-11-26 19:40:09.900301] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:14.867 [2024-11-26 19:40:09.900325] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:14.867 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # es=234 00:06:14.868 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:14.868 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@664 -- # es=106 00:06:14.868 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@665 -- # case "$es" in 00:06:14.868 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@672 -- # es=1 00:06:14.868 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:14.868 00:06:14.868 real 0m0.243s 00:06:14.868 user 0m0.103s 00:06:14.868 sys 0m0.039s 00:06:14.868 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.868 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:06:14.868 ************************************ 00:06:14.868 END TEST dd_invalid_json 00:06:14.868 ************************************ 00:06:14.868 19:40:09 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:06:14.868 19:40:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:14.868 19:40:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.868 19:40:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:14.868 ************************************ 00:06:14.868 START TEST dd_invalid_seek 00:06:14.868 ************************************ 00:06:14.868 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1129 -- # invalid_seek 00:06:14.868 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:06:14.868 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:06:14.868 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:06:14.868 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:06:14.868 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:06:14.868 
19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:06:14.868 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:06:14.868 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # local es=0 00:06:14.868 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:06:14.868 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:14.868 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:06:14.868 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:06:14.868 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:06:14.868 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.868 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:14.868 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.868 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:14.868 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.868 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:14.868 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:14.868 19:40:09 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:06:14.868 [2024-11-26 19:40:10.013226] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:06:14.868 [2024-11-26 19:40:10.013292] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60930 ] 00:06:14.868 { 00:06:14.868 "subsystems": [ 00:06:14.868 { 00:06:14.868 "subsystem": "bdev", 00:06:14.868 "config": [ 00:06:14.868 { 00:06:14.868 "params": { 00:06:14.868 "block_size": 512, 00:06:14.868 "num_blocks": 512, 00:06:14.868 "name": "malloc0" 00:06:14.868 }, 00:06:14.868 "method": "bdev_malloc_create" 00:06:14.868 }, 00:06:14.868 { 00:06:14.868 "params": { 00:06:14.868 "block_size": 512, 00:06:14.868 "num_blocks": 512, 00:06:14.868 "name": "malloc1" 00:06:14.868 }, 00:06:14.868 "method": "bdev_malloc_create" 00:06:14.868 }, 00:06:14.868 { 00:06:14.868 "method": "bdev_wait_for_examine" 00:06:14.868 } 00:06:14.868 ] 00:06:14.868 } 00:06:14.868 ] 00:06:14.868 } 00:06:15.125 [2024-11-26 19:40:10.153535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.125 [2024-11-26 19:40:10.189265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.125 [2024-11-26 19:40:10.220819] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:15.125 [2024-11-26 19:40:10.272226] spdk_dd.c:1145:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:06:15.125 [2024-11-26 19:40:10.272275] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:15.125 [2024-11-26 19:40:10.332451] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:15.383 19:40:10 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # es=228 00:06:15.383 19:40:10 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:15.383 19:40:10 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@664 -- # es=100 00:06:15.383 19:40:10 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@665 -- # case "$es" in 00:06:15.383 19:40:10 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@672 -- # es=1 00:06:15.383 19:40:10 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:15.383 00:06:15.383 real 0m0.403s 00:06:15.383 user 0m0.244s 00:06:15.383 sys 0m0.099s 00:06:15.383 19:40:10 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.383 ************************************ 00:06:15.383 END TEST dd_invalid_seek 00:06:15.383 ************************************ 00:06:15.383 19:40:10 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:06:15.383 19:40:10 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:06:15.383 19:40:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:15.383 19:40:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.383 19:40:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:15.383 ************************************ 00:06:15.383 START TEST dd_invalid_skip 00:06:15.383 ************************************ 00:06:15.383 19:40:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1129 -- # invalid_skip 00:06:15.383 19:40:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- 
dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:06:15.383 19:40:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:06:15.383 19:40:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:06:15.383 19:40:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:06:15.383 19:40:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:06:15.383 19:40:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:06:15.383 19:40:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:06:15.383 19:40:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # local es=0 00:06:15.383 19:40:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:06:15.383 19:40:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:15.383 19:40:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:06:15.383 19:40:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:06:15.383 19:40:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:06:15.383 19:40:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:15.383 19:40:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:15.383 19:40:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:15.383 19:40:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:15.383 19:40:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:15.383 19:40:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:15.383 19:40:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:15.383 19:40:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:06:15.383 { 00:06:15.383 "subsystems": [ 00:06:15.383 { 00:06:15.383 "subsystem": "bdev", 00:06:15.383 "config": [ 00:06:15.383 { 00:06:15.383 "params": { 00:06:15.383 "block_size": 512, 00:06:15.383 "num_blocks": 512, 00:06:15.383 "name": "malloc0" 00:06:15.383 }, 00:06:15.383 "method": "bdev_malloc_create" 00:06:15.383 }, 00:06:15.383 { 00:06:15.383 "params": { 00:06:15.383 "block_size": 512, 00:06:15.383 "num_blocks": 512, 00:06:15.383 "name": "malloc1" 
00:06:15.383 }, 00:06:15.383 "method": "bdev_malloc_create" 00:06:15.383 }, 00:06:15.383 { 00:06:15.383 "method": "bdev_wait_for_examine" 00:06:15.383 } 00:06:15.383 ] 00:06:15.383 } 00:06:15.383 ] 00:06:15.383 } 00:06:15.383 [2024-11-26 19:40:10.456927] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:06:15.383 [2024-11-26 19:40:10.456991] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60958 ] 00:06:15.383 [2024-11-26 19:40:10.599121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.640 [2024-11-26 19:40:10.634695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.640 [2024-11-26 19:40:10.666369] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:15.640 [2024-11-26 19:40:10.716408] spdk_dd.c:1102:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:06:15.640 [2024-11-26 19:40:10.716451] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:15.640 [2024-11-26 19:40:10.775719] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:15.640 19:40:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # es=228 00:06:15.640 19:40:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:15.640 19:40:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@664 -- # es=100 00:06:15.640 19:40:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@665 -- # case "$es" in 00:06:15.640 19:40:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@672 -- # es=1 00:06:15.640 19:40:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:15.641 00:06:15.641 real 0m0.399s 00:06:15.641 user 0m0.235s 00:06:15.641 sys 0m0.102s 00:06:15.641 19:40:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.641 19:40:10 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:06:15.641 ************************************ 00:06:15.641 END TEST dd_invalid_skip 00:06:15.641 ************************************ 00:06:15.641 19:40:10 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:06:15.641 19:40:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:15.641 19:40:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.641 19:40:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:15.641 ************************************ 00:06:15.641 START TEST dd_invalid_input_count 00:06:15.641 ************************************ 00:06:15.641 19:40:10 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1129 -- # invalid_input_count 00:06:15.641 19:40:10 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:06:15.641 19:40:10 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:06:15.641 19:40:10 
spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:06:15.641 19:40:10 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:06:15.641 19:40:10 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:06:15.641 19:40:10 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:06:15.641 19:40:10 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:06:15.641 19:40:10 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # local es=0 00:06:15.641 19:40:10 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:06:15.641 19:40:10 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:15.641 19:40:10 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:06:15.641 19:40:10 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:06:15.641 19:40:10 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:06:15.641 19:40:10 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:15.641 19:40:10 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:15.641 19:40:10 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:15.641 19:40:10 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:15.641 19:40:10 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:15.641 19:40:10 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:15.641 19:40:10 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:15.641 19:40:10 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:06:15.898 { 00:06:15.898 "subsystems": [ 00:06:15.898 { 00:06:15.898 "subsystem": "bdev", 00:06:15.898 "config": [ 00:06:15.898 { 00:06:15.898 "params": { 00:06:15.898 "block_size": 512, 00:06:15.898 "num_blocks": 512, 00:06:15.898 "name": "malloc0" 00:06:15.898 }, 00:06:15.898 "method": "bdev_malloc_create" 00:06:15.898 }, 00:06:15.898 { 00:06:15.898 "params": { 00:06:15.898 "block_size": 512, 00:06:15.898 "num_blocks": 512, 00:06:15.898 "name": "malloc1" 00:06:15.898 }, 00:06:15.898 "method": "bdev_malloc_create" 00:06:15.898 }, 00:06:15.898 { 00:06:15.898 "method": "bdev_wait_for_examine" 00:06:15.898 } 
00:06:15.898 ] 00:06:15.898 } 00:06:15.898 ] 00:06:15.898 } 00:06:15.898 [2024-11-26 19:40:10.897163] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:06:15.898 [2024-11-26 19:40:10.897226] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60992 ] 00:06:15.898 [2024-11-26 19:40:11.034351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.898 [2024-11-26 19:40:11.069486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.898 [2024-11-26 19:40:11.100980] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:16.156 [2024-11-26 19:40:11.151562] spdk_dd.c:1110:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:06:16.156 [2024-11-26 19:40:11.151609] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:16.156 [2024-11-26 19:40:11.212256] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:16.156 19:40:11 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # es=228 00:06:16.156 19:40:11 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:16.156 19:40:11 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@664 -- # es=100 00:06:16.156 19:40:11 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@665 -- # case "$es" in 00:06:16.156 19:40:11 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@672 -- # es=1 00:06:16.156 19:40:11 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:16.156 00:06:16.156 real 0m0.401s 00:06:16.156 user 0m0.238s 00:06:16.156 sys 0m0.100s 00:06:16.156 19:40:11 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.156 ************************************ 00:06:16.156 END TEST dd_invalid_input_count 00:06:16.156 ************************************ 00:06:16.156 19:40:11 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:06:16.156 19:40:11 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:06:16.156 19:40:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:16.156 19:40:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.156 19:40:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:16.156 ************************************ 00:06:16.156 START TEST dd_invalid_output_count 00:06:16.156 ************************************ 00:06:16.156 19:40:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1129 -- # invalid_output_count 00:06:16.156 19:40:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:06:16.156 19:40:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:06:16.156 19:40:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A 
method_bdev_malloc_create_0 00:06:16.156 19:40:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:06:16.157 19:40:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # local es=0 00:06:16.157 19:40:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:06:16.157 19:40:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:16.157 19:40:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:06:16.157 19:40:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:06:16.157 19:40:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:06:16.157 19:40:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:16.157 19:40:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:16.157 19:40:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:16.157 19:40:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:16.157 19:40:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:16.157 19:40:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:16.157 19:40:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:16.157 19:40:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:06:16.157 { 00:06:16.157 "subsystems": [ 00:06:16.157 { 00:06:16.157 "subsystem": "bdev", 00:06:16.157 "config": [ 00:06:16.157 { 00:06:16.157 "params": { 00:06:16.157 "block_size": 512, 00:06:16.157 "num_blocks": 512, 00:06:16.157 "name": "malloc0" 00:06:16.157 }, 00:06:16.157 "method": "bdev_malloc_create" 00:06:16.157 }, 00:06:16.157 { 00:06:16.157 "method": "bdev_wait_for_examine" 00:06:16.157 } 00:06:16.157 ] 00:06:16.157 } 00:06:16.157 ] 00:06:16.157 } 00:06:16.157 [2024-11-26 19:40:11.336985] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:06:16.157 [2024-11-26 19:40:11.337050] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61025 ] 00:06:16.413 [2024-11-26 19:40:11.476205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.413 [2024-11-26 19:40:11.512570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.413 [2024-11-26 19:40:11.544722] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:16.413 [2024-11-26 19:40:11.587637] spdk_dd.c:1152:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:06:16.413 [2024-11-26 19:40:11.587685] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:16.413 [2024-11-26 19:40:11.649133] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:16.671 19:40:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # es=228 00:06:16.671 19:40:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:16.671 19:40:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@664 -- # es=100 00:06:16.671 19:40:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@665 -- # case "$es" in 00:06:16.671 19:40:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@672 -- # es=1 00:06:16.671 19:40:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:16.671 00:06:16.671 real 0m0.394s 00:06:16.671 user 0m0.237s 00:06:16.671 sys 0m0.096s 00:06:16.671 19:40:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.671 ************************************ 00:06:16.671 END TEST dd_invalid_output_count 00:06:16.671 ************************************ 00:06:16.671 19:40:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:06:16.671 19:40:11 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:06:16.671 19:40:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:16.671 19:40:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.671 19:40:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:16.671 ************************************ 00:06:16.671 START TEST dd_bs_not_multiple 00:06:16.671 ************************************ 00:06:16.671 19:40:11 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1129 -- # bs_not_multiple 00:06:16.671 19:40:11 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:06:16.671 19:40:11 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:06:16.671 19:40:11 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:06:16.671 19:40:11 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:06:16.671 19:40:11 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:06:16.671 19:40:11 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:06:16.671 19:40:11 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:06:16.671 19:40:11 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:06:16.671 19:40:11 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # local es=0 00:06:16.671 19:40:11 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:06:16.671 19:40:11 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:06:16.671 19:40:11 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:16.671 19:40:11 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:06:16.671 19:40:11 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:16.671 19:40:11 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:16.671 19:40:11 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:16.671 19:40:11 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:16.672 19:40:11 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:16.672 19:40:11 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:16.672 19:40:11 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:16.672 19:40:11 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:06:16.672 { 00:06:16.672 "subsystems": [ 00:06:16.672 { 00:06:16.672 "subsystem": "bdev", 00:06:16.672 "config": [ 00:06:16.672 { 00:06:16.672 "params": { 00:06:16.672 "block_size": 512, 00:06:16.672 "num_blocks": 512, 00:06:16.672 "name": "malloc0" 00:06:16.672 }, 00:06:16.672 "method": "bdev_malloc_create" 00:06:16.672 }, 00:06:16.672 { 00:06:16.672 "params": { 00:06:16.672 "block_size": 512, 00:06:16.672 "num_blocks": 512, 00:06:16.672 "name": "malloc1" 00:06:16.672 }, 00:06:16.672 "method": "bdev_malloc_create" 00:06:16.672 }, 00:06:16.672 { 00:06:16.672 "method": "bdev_wait_for_examine" 00:06:16.672 } 00:06:16.672 ] 00:06:16.672 } 00:06:16.672 ] 00:06:16.672 } 00:06:16.672 [2024-11-26 19:40:11.766658] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:06:16.672 [2024-11-26 19:40:11.766989] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61057 ] 00:06:16.672 [2024-11-26 19:40:11.906903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.930 [2024-11-26 19:40:11.943255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.930 [2024-11-26 19:40:11.975213] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:16.930 [2024-11-26 19:40:12.025583] spdk_dd.c:1168:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:06:16.930 [2024-11-26 19:40:12.025622] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:16.930 [2024-11-26 19:40:12.086258] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:16.930 19:40:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # es=234 00:06:16.930 19:40:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:16.930 19:40:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@664 -- # es=106 00:06:16.930 19:40:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@665 -- # case "$es" in 00:06:16.930 19:40:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@672 -- # es=1 00:06:16.930 19:40:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:16.930 00:06:16.930 real 0m0.403s 00:06:16.930 user 0m0.242s 00:06:16.930 sys 0m0.093s 00:06:16.930 19:40:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.930 19:40:12 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:06:16.930 ************************************ 00:06:16.930 END TEST dd_bs_not_multiple 00:06:16.930 ************************************ 00:06:16.930 00:06:16.930 real 0m4.602s 00:06:16.930 user 0m2.296s 00:06:16.930 sys 0m1.682s 00:06:16.930 19:40:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.930 19:40:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:16.930 ************************************ 00:06:16.930 END TEST spdk_dd_negative 00:06:16.930 ************************************ 00:06:17.190 00:06:17.190 real 0m58.084s 00:06:17.190 user 0m36.237s 00:06:17.190 sys 0m23.360s 00:06:17.190 19:40:12 spdk_dd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:17.190 19:40:12 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:17.190 ************************************ 00:06:17.190 END TEST spdk_dd 00:06:17.190 ************************************ 00:06:17.190 19:40:12 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:17.190 19:40:12 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:17.190 19:40:12 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:17.190 19:40:12 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:17.190 19:40:12 -- common/autotest_common.sh@10 -- # set +x 00:06:17.190 19:40:12 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:17.190 19:40:12 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:06:17.190 19:40:12 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:06:17.190 19:40:12 -- spdk/autotest.sh@277 -- 
# export NET_TYPE 00:06:17.190 19:40:12 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:06:17.190 19:40:12 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:06:17.190 19:40:12 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:17.190 19:40:12 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:17.190 19:40:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.190 19:40:12 -- common/autotest_common.sh@10 -- # set +x 00:06:17.190 ************************************ 00:06:17.190 START TEST nvmf_tcp 00:06:17.190 ************************************ 00:06:17.190 19:40:12 nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:17.190 * Looking for test storage... 00:06:17.190 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:06:17.190 19:40:12 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:17.190 19:40:12 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:17.190 19:40:12 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:17.190 19:40:12 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:17.190 19:40:12 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:17.190 19:40:12 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:17.190 19:40:12 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:17.190 19:40:12 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:17.190 19:40:12 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:17.190 19:40:12 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:17.190 19:40:12 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:17.190 19:40:12 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:17.190 19:40:12 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:17.190 19:40:12 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:17.190 19:40:12 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:17.190 19:40:12 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:17.190 19:40:12 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:17.191 19:40:12 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:17.191 19:40:12 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:17.191 19:40:12 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:17.191 19:40:12 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:17.191 19:40:12 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:17.191 19:40:12 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:17.191 19:40:12 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:17.191 19:40:12 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:17.191 19:40:12 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:17.191 19:40:12 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:17.191 19:40:12 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:17.191 19:40:12 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:17.191 19:40:12 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:17.191 19:40:12 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:17.191 19:40:12 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:17.191 19:40:12 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:17.191 19:40:12 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:17.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.191 --rc genhtml_branch_coverage=1 00:06:17.191 --rc genhtml_function_coverage=1 00:06:17.191 --rc genhtml_legend=1 00:06:17.191 --rc geninfo_all_blocks=1 00:06:17.191 --rc geninfo_unexecuted_blocks=1 00:06:17.191 00:06:17.191 ' 00:06:17.191 19:40:12 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:17.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.191 --rc genhtml_branch_coverage=1 00:06:17.191 --rc genhtml_function_coverage=1 00:06:17.191 --rc genhtml_legend=1 00:06:17.191 --rc geninfo_all_blocks=1 00:06:17.191 --rc geninfo_unexecuted_blocks=1 00:06:17.191 00:06:17.191 ' 00:06:17.191 19:40:12 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:17.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.191 --rc genhtml_branch_coverage=1 00:06:17.191 --rc genhtml_function_coverage=1 00:06:17.191 --rc genhtml_legend=1 00:06:17.191 --rc geninfo_all_blocks=1 00:06:17.191 --rc geninfo_unexecuted_blocks=1 00:06:17.191 00:06:17.191 ' 00:06:17.191 19:40:12 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:17.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.191 --rc genhtml_branch_coverage=1 00:06:17.191 --rc genhtml_function_coverage=1 00:06:17.191 --rc genhtml_legend=1 00:06:17.191 --rc geninfo_all_blocks=1 00:06:17.191 --rc geninfo_unexecuted_blocks=1 00:06:17.191 00:06:17.191 ' 00:06:17.191 19:40:12 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:17.191 19:40:12 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:17.191 19:40:12 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:17.191 19:40:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:17.191 19:40:12 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.191 19:40:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:17.191 ************************************ 00:06:17.191 START TEST nvmf_target_core 00:06:17.191 ************************************ 00:06:17.191 19:40:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:17.452 * Looking for test storage... 00:06:17.452 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:06:17.452 19:40:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:17.452 19:40:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:06:17.452 19:40:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:17.452 19:40:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:17.452 19:40:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:17.452 19:40:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:17.452 19:40:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:17.452 19:40:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:17.452 19:40:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:17.452 19:40:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:17.452 19:40:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:17.452 19:40:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:17.452 19:40:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:17.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.453 --rc genhtml_branch_coverage=1 00:06:17.453 --rc genhtml_function_coverage=1 00:06:17.453 --rc genhtml_legend=1 00:06:17.453 --rc geninfo_all_blocks=1 00:06:17.453 --rc geninfo_unexecuted_blocks=1 00:06:17.453 00:06:17.453 ' 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:17.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.453 --rc genhtml_branch_coverage=1 00:06:17.453 --rc genhtml_function_coverage=1 00:06:17.453 --rc genhtml_legend=1 00:06:17.453 --rc geninfo_all_blocks=1 00:06:17.453 --rc geninfo_unexecuted_blocks=1 00:06:17.453 00:06:17.453 ' 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:17.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.453 --rc genhtml_branch_coverage=1 00:06:17.453 --rc genhtml_function_coverage=1 00:06:17.453 --rc genhtml_legend=1 00:06:17.453 --rc geninfo_all_blocks=1 00:06:17.453 --rc geninfo_unexecuted_blocks=1 00:06:17.453 00:06:17.453 ' 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:17.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.453 --rc genhtml_branch_coverage=1 00:06:17.453 --rc genhtml_function_coverage=1 00:06:17.453 --rc genhtml_legend=1 00:06:17.453 --rc geninfo_all_blocks=1 00:06:17.453 --rc geninfo_unexecuted_blocks=1 00:06:17.453 00:06:17.453 ' 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=91838eb1-5852-43eb-90b2-09876f360ab2 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
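nvmf/common.sh, sourced at this point, sets the per-run identity and listener defaults that the rest of the log relies on: TCP service IDs 4420/4421/4422, NET_TYPE=virt, and a host NQN freshly generated with nvme gen-hostnqn whose trailing UUID doubles as the host ID (both values are visible above). A minimal sketch of that relationship, not the verbatim common.sh, whose exact expansion may differ:

    # Hedged sketch: derive the host ID from the generated host NQN.
    NVME_HOSTNQN=$(nvme gen-hostnqn)            # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}         # keep only the <uuid> suffix
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")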
00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:17.453 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:17.453 ************************************ 00:06:17.453 START TEST nvmf_host_management 00:06:17.453 ************************************ 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:17.453 * Looking for test storage... 
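The "[: : integer expression expected" warning emitted from common.sh line 33 above is benign: the tested variable expands to an empty string and the test operator -eq only accepts integers, so [ fails with status 2 and the script simply falls through to the next branch. A standalone illustration (the guarded variable name here is invented for the example; the real one is not shown in the log):

    [ "" -eq 1 ]                               # -> "[: : integer expression expected"
    [ "${SPDK_TEST_NVMF_NICS:-0}" -eq 1 ]      # defaulting the value avoids the warning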
00:06:17.453 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:06:17.453 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:17.724 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:17.724 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:17.724 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:17.724 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:17.724 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:17.724 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:17.724 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:17.724 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:17.724 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:17.724 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:17.724 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:17.724 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:17.724 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:17.724 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:17.724 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:17.724 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:17.724 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:17.724 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:17.724 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:17.724 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:17.724 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:17.724 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:17.724 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:17.724 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:17.725 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:17.725 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:17.725 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:17.725 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:17.725 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:17.725 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:17.725 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:17.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.725 --rc genhtml_branch_coverage=1 00:06:17.725 --rc genhtml_function_coverage=1 00:06:17.725 --rc genhtml_legend=1 00:06:17.725 --rc geninfo_all_blocks=1 00:06:17.725 --rc geninfo_unexecuted_blocks=1 00:06:17.725 00:06:17.725 ' 00:06:17.725 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:17.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.725 --rc genhtml_branch_coverage=1 00:06:17.725 --rc genhtml_function_coverage=1 00:06:17.725 --rc genhtml_legend=1 00:06:17.725 --rc geninfo_all_blocks=1 00:06:17.725 --rc geninfo_unexecuted_blocks=1 00:06:17.725 00:06:17.725 ' 00:06:17.725 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:17.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.725 --rc genhtml_branch_coverage=1 00:06:17.725 --rc genhtml_function_coverage=1 00:06:17.725 --rc genhtml_legend=1 00:06:17.725 --rc geninfo_all_blocks=1 00:06:17.725 --rc geninfo_unexecuted_blocks=1 00:06:17.725 00:06:17.725 ' 00:06:17.725 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:17.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.725 --rc genhtml_branch_coverage=1 00:06:17.725 --rc genhtml_function_coverage=1 00:06:17.725 --rc genhtml_legend=1 00:06:17.725 --rc geninfo_all_blocks=1 00:06:17.725 --rc geninfo_unexecuted_blocks=1 00:06:17.725 00:06:17.725 ' 00:06:17.725 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
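The block that just repeated (once per test script) is scripts/common.sh deciding which lcov options to export: it takes the last field of lcov --version, splits both dotted version strings on '.' and '-', compares the numeric fields one by one (here 1.15 against 2, which evaluates true), and picks the --rc lcov_branch_coverage/lcov_function_coverage option set accordingly. A simplified sketch of the same comparison, assuming purely numeric fields:

    version_lt() {                  # usage: version_lt 1.15 2  ->  true
        local IFS=.- i a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                    # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "1.15 sorts below 2"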
00:06:17.725 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:17.725 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:17.725 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:17.725 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:17.725 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:17.725 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:17.725 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:17.725 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:17.725 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:17.725 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:17.725 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:17.725 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:06:17.725 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=91838eb1-5852-43eb-90b2-09876f360ab2 00:06:17.725 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:17.725 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:17.725 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:17.725 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:17.725 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:17.725 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:17.725 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:17.725 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:17.725 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:17.725 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.725 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.725 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.725 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:17.725 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.725 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:17.725 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:17.725 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:17.725 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:17.725 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:17.725 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:17.725 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:17.725 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:17.725 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:17.725 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:17.725 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:17.725 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:17.725 19:40:12 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:17.725 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:17.725 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:17.725 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:17.725 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:17.725 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:17.725 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:17.725 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:17.725 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:17.725 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:17.725 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:06:17.725 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:06:17.725 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:06:17.725 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:06:17.725 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:06:17.725 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:06:17.725 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:17.725 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:06:17.726 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:06:17.726 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:06:17.726 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:17.726 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:06:17.726 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:17.726 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:06:17.726 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:17.726 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:06:17.726 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:17.726 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:17.726 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:17.726 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:17.726 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:17.726 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:17.726 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:06:17.726 Cannot find device "nvmf_init_br" 00:06:17.726 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:06:17.726 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:06:17.726 Cannot find device "nvmf_init_br2" 00:06:17.726 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:06:17.726 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:06:17.726 Cannot find device "nvmf_tgt_br" 00:06:17.726 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:06:17.726 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:06:17.726 Cannot find device "nvmf_tgt_br2" 00:06:17.726 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:06:17.726 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:06:17.726 Cannot find device "nvmf_init_br" 00:06:17.726 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:06:17.726 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:06:17.726 Cannot find device "nvmf_init_br2" 00:06:17.726 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:06:17.726 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:06:17.726 Cannot find device "nvmf_tgt_br" 00:06:17.726 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:06:17.726 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:06:17.726 Cannot find device "nvmf_tgt_br2" 00:06:17.726 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:06:17.726 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:06:17.726 Cannot find device "nvmf_br" 00:06:17.726 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:06:17.726 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:06:17.726 Cannot find device "nvmf_init_if" 00:06:17.726 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:06:17.726 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:06:17.726 Cannot find device "nvmf_init_if2" 00:06:17.726 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:06:17.726 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:17.726 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:17.726 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:06:17.726 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:17.726 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:17.726 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:06:17.726 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:06:17.726 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:17.726 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:06:17.726 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:17.726 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:17.726 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:17.726 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:17.726 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:17.726 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:06:17.726 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:06:17.726 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:06:17.986 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:06:17.986 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:06:17.986 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:06:17.986 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:06:17.986 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:06:17.986 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:06:17.986 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:17.986 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:17.986 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:17.986 19:40:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:06:17.986 19:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:06:17.986 19:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:06:17.986 19:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:06:17.986 19:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:17.986 19:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:17.986 19:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:17.986 19:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:06:17.986 19:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:06:17.986 19:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:06:17.986 19:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:17.986 19:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:06:17.986 19:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:06:17.986 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:17.986 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:06:17.986 00:06:17.986 --- 10.0.0.3 ping statistics --- 00:06:17.986 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:17.986 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:06:17.986 19:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:06:17.986 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:06:17.986 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.065 ms 00:06:17.986 00:06:17.986 --- 10.0.0.4 ping statistics --- 00:06:17.986 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:17.986 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:06:17.986 19:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:17.986 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:17.986 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:06:17.986 00:06:17.986 --- 10.0.0.1 ping statistics --- 00:06:17.986 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:17.986 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:06:17.987 19:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:06:17.987 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:17.987 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:06:17.987 00:06:17.987 --- 10.0.0.2 ping statistics --- 00:06:17.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:17.987 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:06:17.987 19:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:17.987 19:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:06:17.987 19:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:17.987 19:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:17.987 19:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:17.987 19:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:17.987 19:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:17.987 19:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:17.987 19:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:17.987 19:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:17.987 19:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:17.987 19:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:17.987 19:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:17.987 19:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:17.987 19:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:17.987 19:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=61385 00:06:17.987 19:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 61385 00:06:17.987 19:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 61385 ']' 00:06:17.987 19:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:17.987 19:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.987 19:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:17.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.987 19:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.987 19:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:17.987 19:40:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:17.987 [2024-11-26 19:40:13.210307] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
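Everything from nvmf_veth_init down to the four pings is the virtual test topology used for NET_TYPE=virt. The "Cannot find device" / "Cannot open network namespace" lines are just teardown of objects that do not exist yet on a fresh VM; the script then creates the nvmf_tgt_ns_spdk namespace, two initiator-side veth pairs (10.0.0.1 and 10.0.0.2), two target-side pairs whose far ends live inside the namespace (10.0.0.3 and 10.0.0.4), bridges all the peer ends over nvmf_br, opens TCP port 4420 in iptables, and ping-tests each direction. A condensed sketch of one initiator/target pair (the log does the same for the second pair and also brings every link up):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3    # host-side sanity check into the namespace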
00:06:17.987 [2024-11-26 19:40:13.210371] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:18.246 [2024-11-26 19:40:13.357904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:18.246 [2024-11-26 19:40:13.398620] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:18.246 [2024-11-26 19:40:13.398661] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:18.246 [2024-11-26 19:40:13.398667] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:18.246 [2024-11-26 19:40:13.398672] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:18.246 [2024-11-26 19:40:13.398677] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:18.246 [2024-11-26 19:40:13.399743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:18.246 [2024-11-26 19:40:13.400249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:18.246 [2024-11-26 19:40:13.400508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:18.246 [2024-11-26 19:40:13.400530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.246 [2024-11-26 19:40:13.433532] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:19.188 19:40:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:19.188 19:40:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:19.188 19:40:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:19.188 19:40:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:19.188 19:40:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:19.188 19:40:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:19.188 19:40:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:19.188 19:40:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.188 19:40:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:19.188 [2024-11-26 19:40:14.116078] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:19.188 19:40:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.188 19:40:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:19.188 19:40:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:19.188 19:40:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:19.188 19:40:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 
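The target comes up inside the namespace as ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x1E: -i 0 is the shared-memory ID the harness later reuses for process_shm, -e 0xFFFF enables every tracepoint group (hence the spdk_trace notice above), and the -m 0x1E core mask is binary 11110, which is why DPDK reports four usable cores and reactors start on cores 1-4 while core 0 stays free for the bdevperf initiator launched later with -c 0x1. A quick way to decode such a mask:

    mask=0x1E
    for ((core = 0; core < 8; core++)); do
        (( (mask >> core) & 1 )) && echo "reactor on core $core"
    done
    # -> cores 1, 2, 3 and 4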
00:06:19.188 19:40:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:19.188 19:40:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:19.188 19:40:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.188 19:40:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:19.188 Malloc0 00:06:19.188 [2024-11-26 19:40:14.185508] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:06:19.188 19:40:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.188 19:40:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:19.188 19:40:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:19.188 19:40:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:19.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:19.188 19:40:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=61439 00:06:19.188 19:40:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 61439 /var/tmp/bdevperf.sock 00:06:19.188 19:40:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 61439 ']' 00:06:19.188 19:40:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:19.188 19:40:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:19.188 19:40:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:19.188 19:40:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
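Between the removal of rpcs.txt and the Malloc0 / listener lines, host_management.sh cats a small RPC batch into rpc_cmd to provision the target: a 64 MiB Malloc bdev with 512-byte blocks (per MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE above), a subsystem nqn.2016-06.io.spdk:cnode0 restricted to host nqn.2016-06.io.spdk:host0, and a TCP listener on 10.0.0.3:4420. The batch itself is not echoed to the log; based on what the target reports, it plausibly amounts to something like this hypothetical reconstruction:

    # Hypothetical reconstruction -- the actual rpcs.txt content is not printed.
    rpc_cmd <<- EOF
        bdev_malloc_create 64 512 -b Malloc0
        nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
        nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
        nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
        nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    EOF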
00:06:19.188 19:40:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:19.188 19:40:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:19.188 19:40:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:19.188 19:40:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:19.188 19:40:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:19.188 19:40:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:19.188 19:40:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:19.188 { 00:06:19.188 "params": { 00:06:19.188 "name": "Nvme$subsystem", 00:06:19.188 "trtype": "$TEST_TRANSPORT", 00:06:19.188 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:19.188 "adrfam": "ipv4", 00:06:19.188 "trsvcid": "$NVMF_PORT", 00:06:19.188 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:19.188 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:19.188 "hdgst": ${hdgst:-false}, 00:06:19.188 "ddgst": ${ddgst:-false} 00:06:19.188 }, 00:06:19.188 "method": "bdev_nvme_attach_controller" 00:06:19.188 } 00:06:19.188 EOF 00:06:19.188 )") 00:06:19.188 19:40:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:19.188 19:40:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:19.188 19:40:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:19.188 19:40:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:19.188 "params": { 00:06:19.188 "name": "Nvme0", 00:06:19.188 "trtype": "tcp", 00:06:19.188 "traddr": "10.0.0.3", 00:06:19.188 "adrfam": "ipv4", 00:06:19.188 "trsvcid": "4420", 00:06:19.188 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:19.188 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:19.188 "hdgst": false, 00:06:19.188 "ddgst": false 00:06:19.188 }, 00:06:19.188 "method": "bdev_nvme_attach_controller" 00:06:19.188 }' 00:06:19.188 [2024-11-26 19:40:14.259502] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:06:19.188 [2024-11-26 19:40:14.259562] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61439 ] 00:06:19.188 [2024-11-26 19:40:14.397014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.446 [2024-11-26 19:40:14.434418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.446 [2024-11-26 19:40:14.473870] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:19.446 Running I/O for 10 seconds... 
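The JSON printed just above is what gen_nvmf_target_json feeds to bdevperf over --json /dev/fd/63: a single bdev_nvme_attach_controller call pointing controller Nvme0 at the in-namespace listener (tcp, 10.0.0.3:4420, subsystem cnode0, host NQN host0). bdevperf then runs a verify workload at queue depth 64 with 64 KiB I/Os for 10 seconds (-q 64 -o 65536 -w verify -t 10). The waitforio steps that follow poll the resulting Nvme0n1 bdev until at least 100 reads have completed (the log sees 899) and then remove the host from the subsystem, which is what produces the wall of "ABORTED - SQ DELETION" completions below: the point of this host-management case is tearing down the initiator's queues while I/O is still in flight. A simplified sketch of that poll-then-remove step (the delay between attempts is assumed; the log does not show it):

    for ((i = 10; i != 0; i--)); do
        ops=$(rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 |
              jq -r '.bdevs[0].num_read_ops')
        [ "$ops" -ge 100 ] && break
        sleep 1        # pacing assumed for the sketch
    done
    rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0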
00:06:20.013 19:40:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:20.013 19:40:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:20.013 19:40:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:20.013 19:40:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.013 19:40:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:20.013 19:40:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.013 19:40:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:20.013 19:40:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:20.013 19:40:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:20.013 19:40:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:20.013 19:40:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:20.013 19:40:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:20.013 19:40:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:20.013 19:40:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:20.013 19:40:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:20.013 19:40:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:20.013 19:40:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.013 19:40:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:20.013 19:40:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.013 19:40:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=899 00:06:20.013 19:40:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 899 -ge 100 ']' 00:06:20.013 19:40:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:20.013 19:40:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:20.013 19:40:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:20.013 19:40:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:20.013 19:40:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.013 19:40:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:20.013 [2024-11-26 
19:40:15.218489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:125184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.013 [2024-11-26 19:40:15.218530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:20.013 [2024-11-26 19:40:15.218545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:125312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.013 [2024-11-26 19:40:15.218552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:20.014 [2024-11-26 19:40:15.218560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:125440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.014 [2024-11-26 19:40:15.218566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:20.014 [2024-11-26 19:40:15.218574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:125568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.014 [2024-11-26 19:40:15.218579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:20.014 [2024-11-26 19:40:15.218587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:125696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.014 [2024-11-26 19:40:15.218608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:20.014 [2024-11-26 19:40:15.218616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:125824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.014 [2024-11-26 19:40:15.218622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:20.014 [2024-11-26 19:40:15.218630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:125952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.014 [2024-11-26 19:40:15.218635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:20.014 [2024-11-26 19:40:15.218643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:126080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.014 [2024-11-26 19:40:15.218648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:20.014 [2024-11-26 19:40:15.218656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:126208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.014 [2024-11-26 19:40:15.218662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:20.014 [2024-11-26 19:40:15.218670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:126336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.014 [2024-11-26 19:40:15.218675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:20.014 [2024-11-26 
19:40:15.218683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:126464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.014 [2024-11-26 19:40:15.218688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:20.014 [2024-11-26 19:40:15.218696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:126592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.014 [2024-11-26 19:40:15.218701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:20.014 [2024-11-26 19:40:15.218708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:126720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.014 [2024-11-26 19:40:15.218721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:20.014 [2024-11-26 19:40:15.218729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:126848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.014 [2024-11-26 19:40:15.218734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:20.014 [2024-11-26 19:40:15.218747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:126976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.014 [2024-11-26 19:40:15.218753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:20.014 [2024-11-26 19:40:15.218760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:127104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.014 [2024-11-26 19:40:15.218776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:20.014 [2024-11-26 19:40:15.218784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:127232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.014 [2024-11-26 19:40:15.218789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:20.014 [2024-11-26 19:40:15.218797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:127360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.014 [2024-11-26 19:40:15.218803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:20.014 [2024-11-26 19:40:15.218811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:127488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.014 [2024-11-26 19:40:15.218816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:20.014 [2024-11-26 19:40:15.218824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:127616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.014 [2024-11-26 19:40:15.218829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:20.014 [2024-11-26 
19:40:15.218837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:127744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.014 [2024-11-26 19:40:15.218843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:20.014 [2024-11-26 19:40:15.218850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:127872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.014 [2024-11-26 19:40:15.218855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:20.014 [2024-11-26 19:40:15.218863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:128000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.014 [2024-11-26 19:40:15.218868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:20.014 [2024-11-26 19:40:15.218876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:128128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.014 [2024-11-26 19:40:15.218881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:20.014 [2024-11-26 19:40:15.218889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:128256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.014 [2024-11-26 19:40:15.218894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:20.014 [2024-11-26 19:40:15.218902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:128384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.014 [2024-11-26 19:40:15.218908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:20.014 [2024-11-26 19:40:15.218915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:128512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.014 [2024-11-26 19:40:15.218921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:20.014 [2024-11-26 19:40:15.218929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:128640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.014 [2024-11-26 19:40:15.218934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:20.014 [2024-11-26 19:40:15.218941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:128768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.014 [2024-11-26 19:40:15.218947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:20.014 [2024-11-26 19:40:15.218954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:128896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.014 [2024-11-26 19:40:15.218960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:20.014 [2024-11-26 
19:40:15.218969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:129024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.014 [2024-11-26 19:40:15.218975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:20.014 [2024-11-26 19:40:15.218982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:129152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.014 [2024-11-26 19:40:15.218987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:20.014 [2024-11-26 19:40:15.218995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:129280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.014 [2024-11-26 19:40:15.219001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:20.014 [2024-11-26 19:40:15.219012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:129408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.014 [2024-11-26 19:40:15.219018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:20.014 [2024-11-26 19:40:15.219026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:129536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.014 [2024-11-26 19:40:15.219031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:20.014 [2024-11-26 19:40:15.219039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:129664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.014 [2024-11-26 19:40:15.219044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:20.014 [2024-11-26 19:40:15.219052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:129792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.014 [2024-11-26 19:40:15.219057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:20.014 [2024-11-26 19:40:15.219065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:129920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.014 [2024-11-26 19:40:15.219070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:20.014 [2024-11-26 19:40:15.219078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:130048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.014 [2024-11-26 19:40:15.219084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:20.014 [2024-11-26 19:40:15.219091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:130176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.014 [2024-11-26 19:40:15.219097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:20.014 [2024-11-26 
19:40:15.219104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:130304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.014 [2024-11-26 19:40:15.219109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:20.014 [2024-11-26 19:40:15.219117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:130432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.014 [2024-11-26 19:40:15.219122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:20.014 [2024-11-26 19:40:15.219130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:130560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.014 [2024-11-26 19:40:15.219135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:20.014 [2024-11-26 19:40:15.219142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:130688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.014 [2024-11-26 19:40:15.219148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:20.014 [2024-11-26 19:40:15.219155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:130816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.014 [2024-11-26 19:40:15.219161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:20.014 [2024-11-26 19:40:15.219168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:130944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.014 [2024-11-26 19:40:15.219173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:20.014 [2024-11-26 19:40:15.219183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.014 [2024-11-26 19:40:15.219188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:20.014 [2024-11-26 19:40:15.219196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:123008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.014 [2024-11-26 19:40:15.219202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:20.014 [2024-11-26 19:40:15.219209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:123136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.014 [2024-11-26 19:40:15.219214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:20.014 [2024-11-26 19:40:15.219224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:123264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.014 [2024-11-26 19:40:15.219230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:20.014 [2024-11-26 
19:40:15.219237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:123392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.014 [2024-11-26 19:40:15.219243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:20.014 [2024-11-26 19:40:15.219251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:123520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.014 [2024-11-26 19:40:15.219256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:20.014 [2024-11-26 19:40:15.219263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:123648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.014 [2024-11-26 19:40:15.219269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:20.014 [2024-11-26 19:40:15.219276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:123776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.014 [2024-11-26 19:40:15.219282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:20.015 [2024-11-26 19:40:15.219289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:123904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.015 [2024-11-26 19:40:15.219295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:20.015 [2024-11-26 19:40:15.219302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:124032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.015 [2024-11-26 19:40:15.219307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:20.015 [2024-11-26 19:40:15.219314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:124160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.015 [2024-11-26 19:40:15.219320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:20.015 [2024-11-26 19:40:15.219328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:124288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.015 [2024-11-26 19:40:15.219333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:20.015 [2024-11-26 19:40:15.219341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:124416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.015 [2024-11-26 19:40:15.219346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:20.015 [2024-11-26 19:40:15.219353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:124544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.015 [2024-11-26 19:40:15.219359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:20.015 [2024-11-26 19:40:15.219366] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:124672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.015 [2024-11-26 19:40:15.219371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:20.015 [2024-11-26 19:40:15.219379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:124800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.015 [2024-11-26 19:40:15.219384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:20.015 [2024-11-26 19:40:15.219394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:124928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.015 [2024-11-26 19:40:15.219399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:20.015 [2024-11-26 19:40:15.219407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:125056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:20.015 [2024-11-26 19:40:15.219412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:20.015 [2024-11-26 19:40:15.219419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb52d0 is same with the state(6) to be set 00:06:20.015 [2024-11-26 19:40:15.220585] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:06:20.015 task offset: 125184 on job bdev=Nvme0n1 fails 00:06:20.015 00:06:20.015 Latency(us) 00:06:20.015 [2024-11-26T19:40:15.262Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:20.015 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:20.015 Job: Nvme0n1 ended in about 0.64 seconds with error 00:06:20.015 Verification LBA range: start 0x0 length 0x400 00:06:20.015 Nvme0n1 : 0.64 1495.68 93.48 99.71 0.00 39166.05 1424.15 32868.82 00:06:20.015 [2024-11-26T19:40:15.262Z] =================================================================================================================== 00:06:20.015 [2024-11-26T19:40:15.262Z] Total : 1495.68 93.48 99.71 0.00 39166.05 1424.15 32868.82 00:06:20.015 [2024-11-26 19:40:15.222720] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:20.015 [2024-11-26 19:40:15.222815] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bbace0 (9): 19:40:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.015 Bad file descriptor 00:06:20.015 19:40:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:20.015 19:40:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.015 19:40:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:20.015 [2024-11-26 19:40:15.225903] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:06:20.015 [2024-11-26 19:40:15.226067] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT 
qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:06:20.015 [2024-11-26 19:40:15.226165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:20.015 [2024-11-26 19:40:15.226201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:06:20.015 [2024-11-26 19:40:15.226265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:06:20.015 [2024-11-26 19:40:15.226305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:06:20.015 [2024-11-26 19:40:15.226331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bbace0 00:06:20.015 [2024-11-26 19:40:15.226366] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bbace0 (9): Bad file descriptor 00:06:20.015 [2024-11-26 19:40:15.226400] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:06:20.015 [2024-11-26 19:40:15.226436] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:06:20.015 [2024-11-26 19:40:15.226464] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:06:20.015 [2024-11-26 19:40:15.226483] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:06:20.015 19:40:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.015 19:40:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:21.388 19:40:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 61439 00:06:21.388 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (61439) - No such process 00:06:21.388 19:40:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:21.388 19:40:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:21.388 19:40:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:21.388 19:40:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:21.388 19:40:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:21.388 19:40:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:21.388 19:40:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:21.388 19:40:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:21.388 { 00:06:21.388 "params": { 00:06:21.388 "name": "Nvme$subsystem", 00:06:21.388 "trtype": "$TEST_TRANSPORT", 00:06:21.388 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:21.388 "adrfam": "ipv4", 00:06:21.388 "trsvcid": "$NVMF_PORT", 00:06:21.388 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:06:21.388 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:21.388 "hdgst": ${hdgst:-false}, 00:06:21.388 "ddgst": ${ddgst:-false} 00:06:21.388 }, 00:06:21.388 "method": "bdev_nvme_attach_controller" 00:06:21.388 } 00:06:21.388 EOF 00:06:21.388 )") 00:06:21.388 19:40:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:21.388 19:40:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:21.388 19:40:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:21.388 19:40:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:21.388 "params": { 00:06:21.388 "name": "Nvme0", 00:06:21.388 "trtype": "tcp", 00:06:21.388 "traddr": "10.0.0.3", 00:06:21.388 "adrfam": "ipv4", 00:06:21.388 "trsvcid": "4420", 00:06:21.388 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:21.388 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:21.388 "hdgst": false, 00:06:21.388 "ddgst": false 00:06:21.388 }, 00:06:21.388 "method": "bdev_nvme_attach_controller" 00:06:21.388 }' 00:06:21.388 [2024-11-26 19:40:16.271759] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:06:21.389 [2024-11-26 19:40:16.271828] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61477 ] 00:06:21.389 [2024-11-26 19:40:16.411297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.389 [2024-11-26 19:40:16.448478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.389 [2024-11-26 19:40:16.490233] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:21.389 Running I/O for 1 seconds... 
00:06:22.763 1920.00 IOPS, 120.00 MiB/s 00:06:22.763 Latency(us) 00:06:22.763 [2024-11-26T19:40:18.010Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:22.763 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:22.763 Verification LBA range: start 0x0 length 0x400 00:06:22.763 Nvme0n1 : 1.03 1920.87 120.05 0.00 0.00 32561.74 3175.98 35288.62 00:06:22.763 [2024-11-26T19:40:18.010Z] =================================================================================================================== 00:06:22.763 [2024-11-26T19:40:18.010Z] Total : 1920.87 120.05 0.00 0.00 32561.74 3175.98 35288.62 00:06:22.763 19:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:22.763 19:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:22.763 19:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:06:22.763 19:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:06:22.763 19:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:22.763 19:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:22.763 19:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:06:22.763 19:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:22.763 19:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:06:22.763 19:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:22.763 19:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:22.763 rmmod nvme_tcp 00:06:22.763 rmmod nvme_fabrics 00:06:22.763 rmmod nvme_keyring 00:06:22.763 19:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:22.763 19:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:22.763 19:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:22.763 19:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 61385 ']' 00:06:22.763 19:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 61385 00:06:22.763 19:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 61385 ']' 00:06:22.763 19:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 61385 00:06:22.763 19:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:06:22.763 19:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:22.763 19:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61385 00:06:22.763 killing process with pid 61385 00:06:22.763 19:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:22.763 19:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management 
-- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:22.763 19:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61385' 00:06:22.763 19:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 61385 00:06:22.763 19:40:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 61385 00:06:22.763 [2024-11-26 19:40:17.993510] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:23.023 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:23.023 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:23.023 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:23.023 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:23.023 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:06:23.023 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:23.023 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:06:23.023 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:23.023 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:06:23.023 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:06:23.023 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:06:23.023 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:06:23.023 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:06:23.023 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:06:23.023 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:06:23.023 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:06:23.023 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:06:23.023 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:06:23.023 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:06:23.023 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:06:23.023 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:23.023 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:23.023 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:06:23.023 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:06:23.023 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:23.023 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:23.023 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:06:23.023 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:23.023 00:06:23.023 real 0m5.669s 00:06:23.023 user 0m20.966s 00:06:23.023 sys 0m1.167s 00:06:23.023 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.023 ************************************ 00:06:23.023 END TEST nvmf_host_management 00:06:23.023 ************************************ 00:06:23.023 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:23.284 19:40:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:23.284 19:40:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:23.284 19:40:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.284 19:40:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:23.284 ************************************ 00:06:23.284 START TEST nvmf_lvol 00:06:23.284 ************************************ 00:06:23.284 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:23.284 * Looking for test storage... 
00:06:23.284 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:23.284 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:23.284 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:23.284 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:06:23.284 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:23.284 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:23.284 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:23.284 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:23.284 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:23.284 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:23.284 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:23.284 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:23.284 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:23.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.285 --rc genhtml_branch_coverage=1 00:06:23.285 --rc genhtml_function_coverage=1 00:06:23.285 --rc genhtml_legend=1 00:06:23.285 --rc geninfo_all_blocks=1 00:06:23.285 --rc geninfo_unexecuted_blocks=1 00:06:23.285 00:06:23.285 ' 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:23.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.285 --rc genhtml_branch_coverage=1 00:06:23.285 --rc genhtml_function_coverage=1 00:06:23.285 --rc genhtml_legend=1 00:06:23.285 --rc geninfo_all_blocks=1 00:06:23.285 --rc geninfo_unexecuted_blocks=1 00:06:23.285 00:06:23.285 ' 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:23.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.285 --rc genhtml_branch_coverage=1 00:06:23.285 --rc genhtml_function_coverage=1 00:06:23.285 --rc genhtml_legend=1 00:06:23.285 --rc geninfo_all_blocks=1 00:06:23.285 --rc geninfo_unexecuted_blocks=1 00:06:23.285 00:06:23.285 ' 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:23.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.285 --rc genhtml_branch_coverage=1 00:06:23.285 --rc genhtml_function_coverage=1 00:06:23.285 --rc genhtml_legend=1 00:06:23.285 --rc geninfo_all_blocks=1 00:06:23.285 --rc geninfo_unexecuted_blocks=1 00:06:23.285 00:06:23.285 ' 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:23.285 19:40:18 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=91838eb1-5852-43eb-90b2-09876f360ab2 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:23.285 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:06:23.285 
19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:06:23.285 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:06:23.286 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:06:23.286 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:23.286 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:06:23.286 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:06:23.286 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:06:23.286 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:23.286 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:06:23.286 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:23.286 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:06:23.286 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:23.286 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:06:23.286 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:23.286 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:23.286 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:23.286 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
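As a reading aid, the veth/namespace topology that nvmf_veth_init assembles from these variables, and which the ip commands traced below then create and verify with pings, condenses to the following sketch. It only restates commands visible in the trace, with the address plan noted in comments.

# Condensed sketch of the nvmf_veth_init topology (same commands as the trace below):
# the initiator side stays in the root namespace (10.0.0.1 and 10.0.0.2), the target
# side moves into nvmf_tgt_ns_spdk (10.0.0.3 and 10.0.0.4), and the bridge end of
# every veth pair is enslaved to nvmf_br.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br    # 10.0.0.1/24
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2   # 10.0.0.2/24
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br     # 10.0.0.3/24, moved into the netns
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2    # 10.0.0.4/24, moved into the netns
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br    # likewise nvmf_init_br2, nvmf_tgt_br, nvmf_tgt_br2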
00:06:23.286 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:23.286 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:23.286 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:06:23.286 Cannot find device "nvmf_init_br" 00:06:23.286 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:06:23.286 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:06:23.286 Cannot find device "nvmf_init_br2" 00:06:23.286 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:06:23.286 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:06:23.546 Cannot find device "nvmf_tgt_br" 00:06:23.546 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:06:23.546 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:06:23.546 Cannot find device "nvmf_tgt_br2" 00:06:23.546 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:06:23.546 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:06:23.546 Cannot find device "nvmf_init_br" 00:06:23.546 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:06:23.546 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:06:23.546 Cannot find device "nvmf_init_br2" 00:06:23.546 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:06:23.546 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:06:23.546 Cannot find device "nvmf_tgt_br" 00:06:23.546 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:06:23.546 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:06:23.546 Cannot find device "nvmf_tgt_br2" 00:06:23.546 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:06:23.546 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:06:23.546 Cannot find device "nvmf_br" 00:06:23.546 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:06:23.546 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:06:23.546 Cannot find device "nvmf_init_if" 00:06:23.546 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:06:23.546 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:06:23.546 Cannot find device "nvmf_init_if2" 00:06:23.546 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:06:23.546 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:23.546 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:23.546 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:06:23.546 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:23.546 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:06:23.546 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:06:23.546 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:06:23.546 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:23.546 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:06:23.546 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:23.546 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:23.546 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:23.546 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:23.546 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:23.546 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:06:23.546 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:06:23.546 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:06:23.546 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:06:23.546 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:06:23.546 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:06:23.546 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:06:23.546 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:06:23.546 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:06:23.546 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:23.546 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:23.546 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:23.546 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:06:23.546 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:06:23.546 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:06:23.546 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:06:23.546 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:23.804 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:23.804 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:23.804 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:06:23.804 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:06:23.804 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:06:23.804 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:23.804 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:06:23.804 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:06:23.804 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:23.804 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:06:23.804 00:06:23.804 --- 10.0.0.3 ping statistics --- 00:06:23.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:23.805 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:06:23.805 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:06:23.805 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:06:23.805 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.026 ms 00:06:23.805 00:06:23.805 --- 10.0.0.4 ping statistics --- 00:06:23.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:23.805 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:06:23.805 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:23.805 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:23.805 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.014 ms 00:06:23.805 00:06:23.805 --- 10.0.0.1 ping statistics --- 00:06:23.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:23.805 rtt min/avg/max/mdev = 0.014/0.014/0.014/0.000 ms 00:06:23.805 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:06:23.805 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:23.805 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.043 ms 00:06:23.805 00:06:23.805 --- 10.0.0.2 ping statistics --- 00:06:23.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:23.805 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:06:23.805 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:23.805 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:06:23.805 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:23.805 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:23.805 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:23.805 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:23.805 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:23.805 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:23.805 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:23.805 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:23.805 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:23.805 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:23.805 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:23.805 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=61737 00:06:23.805 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 61737 00:06:23.805 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 61737 ']' 00:06:23.805 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.805 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:23.805 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.805 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:23.805 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:23.805 19:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:23.805 [2024-11-26 19:40:18.886299] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:06:23.805 [2024-11-26 19:40:18.886358] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:23.805 [2024-11-26 19:40:19.025478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:24.064 [2024-11-26 19:40:19.069007] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:24.064 [2024-11-26 19:40:19.069057] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:24.064 [2024-11-26 19:40:19.069063] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:24.064 [2024-11-26 19:40:19.069068] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:24.064 [2024-11-26 19:40:19.069073] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:24.064 [2024-11-26 19:40:19.069892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.064 [2024-11-26 19:40:19.070151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:24.064 [2024-11-26 19:40:19.070243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.064 [2024-11-26 19:40:19.112394] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:24.633 19:40:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:24.633 19:40:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:06:24.633 19:40:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:24.633 19:40:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:24.633 19:40:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:24.633 19:40:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:24.633 19:40:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:24.894 [2024-11-26 19:40:19.970787] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:24.894 19:40:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:25.154 19:40:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:25.154 19:40:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:25.415 19:40:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:25.415 19:40:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:25.415 19:40:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:25.675 19:40:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=144e9b05-9278-4c3d-af7b-a04a759e0189 00:06:25.675 19:40:20 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 144e9b05-9278-4c3d-af7b-a04a759e0189 lvol 20 00:06:25.934 19:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=8bd9f839-eb9e-4028-8f43-777f7d6db01b 00:06:25.934 19:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:26.263 19:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8bd9f839-eb9e-4028-8f43-777f7d6db01b 00:06:26.524 19:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:06:26.524 [2024-11-26 19:40:21.685915] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:06:26.524 19:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:06:26.784 19:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=61807 00:06:26.784 19:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:26.784 19:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:27.726 19:40:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 8bd9f839-eb9e-4028-8f43-777f7d6db01b MY_SNAPSHOT 00:06:27.984 19:40:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=09f7d3e0-b17f-4ccc-be2c-a0bee1a58319 00:06:27.984 19:40:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 8bd9f839-eb9e-4028-8f43-777f7d6db01b 30 00:06:28.241 19:40:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 09f7d3e0-b17f-4ccc-be2c-a0bee1a58319 MY_CLONE 00:06:28.499 19:40:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=d8080778-2964-478d-baf6-3459f6ca6b44 00:06:28.499 19:40:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate d8080778-2964-478d-baf6-3459f6ca6b44 00:06:28.758 19:40:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 61807 00:06:38.722 Initializing NVMe Controllers 00:06:38.722 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:06:38.722 Controller IO queue size 128, less than required. 00:06:38.722 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:38.722 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:06:38.722 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:06:38.722 Initialization complete. Launching workers. 
00:06:38.722 ======================================================== 00:06:38.722 Latency(us) 00:06:38.722 Device Information : IOPS MiB/s Average min max 00:06:38.722 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 14865.79 58.07 8610.85 1051.95 61386.01 00:06:38.722 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 14715.20 57.48 8698.33 3322.16 40001.63 00:06:38.722 ======================================================== 00:06:38.722 Total : 29580.99 115.55 8654.37 1051.95 61386.01 00:06:38.722 00:06:38.722 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:38.722 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 8bd9f839-eb9e-4028-8f43-777f7d6db01b 00:06:38.722 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 144e9b05-9278-4c3d-af7b-a04a759e0189 00:06:38.722 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:06:38.722 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:06:38.722 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:06:38.722 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:38.722 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:06:38.722 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:38.722 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:06:38.722 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:38.722 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:38.722 rmmod nvme_tcp 00:06:38.722 rmmod nvme_fabrics 00:06:38.722 rmmod nvme_keyring 00:06:38.722 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:38.722 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:06:38.723 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:06:38.723 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 61737 ']' 00:06:38.723 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 61737 00:06:38.723 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 61737 ']' 00:06:38.723 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 61737 00:06:38.723 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:06:38.723 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:38.723 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61737 00:06:38.723 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:38.723 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:38.723 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process 
with pid 61737' 00:06:38.723 killing process with pid 61737 00:06:38.723 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 61737 00:06:38.723 19:40:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 61737 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:06:38.723 00:06:38.723 real 0m14.975s 00:06:38.723 user 1m2.503s 00:06:38.723 sys 0m3.582s 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:38.723 ************************************ 00:06:38.723 END TEST nvmf_lvol 00:06:38.723 ************************************ 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:38.723 ************************************ 00:06:38.723 START TEST nvmf_lvs_grow 00:06:38.723 ************************************ 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:38.723 * Looking for test storage... 00:06:38.723 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:38.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.723 --rc genhtml_branch_coverage=1 00:06:38.723 --rc genhtml_function_coverage=1 00:06:38.723 --rc genhtml_legend=1 00:06:38.723 --rc geninfo_all_blocks=1 00:06:38.723 --rc geninfo_unexecuted_blocks=1 00:06:38.723 00:06:38.723 ' 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:38.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.723 --rc genhtml_branch_coverage=1 00:06:38.723 --rc genhtml_function_coverage=1 00:06:38.723 --rc genhtml_legend=1 00:06:38.723 --rc geninfo_all_blocks=1 00:06:38.723 --rc geninfo_unexecuted_blocks=1 00:06:38.723 00:06:38.723 ' 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:38.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.723 --rc genhtml_branch_coverage=1 00:06:38.723 --rc genhtml_function_coverage=1 00:06:38.723 --rc genhtml_legend=1 00:06:38.723 --rc geninfo_all_blocks=1 00:06:38.723 --rc geninfo_unexecuted_blocks=1 00:06:38.723 00:06:38.723 ' 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:38.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.723 --rc genhtml_branch_coverage=1 00:06:38.723 --rc genhtml_function_coverage=1 00:06:38.723 --rc genhtml_legend=1 00:06:38.723 --rc geninfo_all_blocks=1 00:06:38.723 --rc geninfo_unexecuted_blocks=1 00:06:38.723 00:06:38.723 ' 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:06:38.723 19:40:33 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:38.723 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=91838eb1-5852-43eb-90b2-09876f360ab2 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:38.724 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:06:38.724 Cannot find device "nvmf_init_br" 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:06:38.724 Cannot find device "nvmf_init_br2" 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:06:38.724 Cannot find device "nvmf_tgt_br" 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:06:38.724 Cannot find device "nvmf_tgt_br2" 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:06:38.724 Cannot find device "nvmf_init_br" 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:06:38.724 Cannot find device "nvmf_init_br2" 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:06:38.724 Cannot find device "nvmf_tgt_br" 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:06:38.724 Cannot find device "nvmf_tgt_br2" 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:06:38.724 Cannot find device "nvmf_br" 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:06:38.724 Cannot find device "nvmf_init_if" 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:06:38.724 Cannot find device "nvmf_init_if2" 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:38.724 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:38.724 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:06:38.725 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:38.725 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:06:38.725 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:06:38.725 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:06:38.725 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:38.725 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:06:38.725 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:38.725 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:38.725 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:38.725 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:38.725 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:38.725 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:06:38.725 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:06:38.725 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:06:38.725 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:06:38.725 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:06:38.725 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:06:38.725 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:06:38.725 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:06:38.725 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:06:38.725 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:38.725 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:38.725 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:38.725 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:06:38.725 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:06:38.725 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:06:38.725 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:06:38.725 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:38.725 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
00:06:38.725 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:38.725 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:06:38.725 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:06:38.725 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:06:38.725 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:38.725 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:06:38.725 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:06:38.725 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:38.725 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:06:38.725 00:06:38.725 --- 10.0.0.3 ping statistics --- 00:06:38.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:38.725 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:06:38.725 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:06:38.725 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:06:38.725 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:06:38.725 00:06:38.725 --- 10.0.0.4 ping statistics --- 00:06:38.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:38.725 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:06:38.725 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:38.725 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:38.725 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:06:38.725 00:06:38.725 --- 10.0.0.1 ping statistics --- 00:06:38.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:38.725 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:06:38.725 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:06:38.725 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:38.725 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.040 ms 00:06:38.725 00:06:38.725 --- 10.0.0.2 ping statistics --- 00:06:38.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:38.725 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:06:38.725 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:38.725 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:06:38.725 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:38.725 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:38.725 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:38.725 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:38.725 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:38.725 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:38.725 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:38.725 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:06:38.725 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:38.725 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:38.725 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:38.725 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=62180 00:06:38.725 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 62180 00:06:38.725 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 62180 ']' 00:06:38.725 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.725 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.725 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.725 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.725 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:38.725 19:40:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:06:38.725 [2024-11-26 19:40:33.904799] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:06:38.725 [2024-11-26 19:40:33.905268] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:38.983 [2024-11-26 19:40:34.048377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.983 [2024-11-26 19:40:34.085582] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:38.983 [2024-11-26 19:40:34.085629] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:38.983 [2024-11-26 19:40:34.085636] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:38.983 [2024-11-26 19:40:34.085641] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:38.983 [2024-11-26 19:40:34.085645] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:38.983 [2024-11-26 19:40:34.085916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.983 [2024-11-26 19:40:34.118486] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:39.550 19:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:39.550 19:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:06:39.550 19:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:39.550 19:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:39.550 19:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:39.808 19:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:39.808 19:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:39.808 [2024-11-26 19:40:34.996832] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:39.808 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:06:39.808 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:39.808 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:39.808 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:39.808 ************************************ 00:06:39.808 START TEST lvs_grow_clean 00:06:39.808 ************************************ 00:06:39.808 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:06:39.808 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:06:39.808 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:06:39.808 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:06:39.809 19:40:35 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:06:39.809 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:06:39.809 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:06:39.809 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:06:39.809 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:06:39.809 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:40.067 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:06:40.067 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:06:40.325 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=6233137e-7bb1-45e5-b429-5e13973546ad 00:06:40.325 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6233137e-7bb1-45e5-b429-5e13973546ad 00:06:40.325 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:06:40.584 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:06:40.584 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:06:40.584 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6233137e-7bb1-45e5-b429-5e13973546ad lvol 150 00:06:40.842 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=660b17f0-1a22-442c-939d-6f32dc48115a 00:06:40.842 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:06:40.842 19:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:06:40.842 [2024-11-26 19:40:36.030892] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:06:40.842 [2024-11-26 19:40:36.030954] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:06:40.842 true 00:06:40.842 19:40:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6233137e-7bb1-45e5-b429-5e13973546ad 00:06:40.842 19:40:36 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:06:41.101 19:40:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:06:41.101 19:40:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:41.359 19:40:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 660b17f0-1a22-442c-939d-6f32dc48115a 00:06:41.618 19:40:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:06:41.875 [2024-11-26 19:40:36.875342] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:06:41.875 19:40:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:06:41.875 19:40:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=62259 00:06:41.875 19:40:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:41.875 19:40:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:06:41.875 19:40:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 62259 /var/tmp/bdevperf.sock 00:06:41.875 19:40:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 62259 ']' 00:06:41.875 19:40:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:41.875 19:40:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:41.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:41.875 19:40:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:41.875 19:40:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:41.875 19:40:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:06:42.133 [2024-11-26 19:40:37.133917] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:06:42.133 [2024-11-26 19:40:37.133974] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62259 ] 00:06:42.133 [2024-11-26 19:40:37.271799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.133 [2024-11-26 19:40:37.308419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.133 [2024-11-26 19:40:37.342556] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:43.076 19:40:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:43.076 19:40:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:06:43.076 19:40:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:06:43.363 Nvme0n1 00:06:43.363 19:40:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:06:43.363 [ 00:06:43.363 { 00:06:43.363 "name": "Nvme0n1", 00:06:43.363 "aliases": [ 00:06:43.363 "660b17f0-1a22-442c-939d-6f32dc48115a" 00:06:43.364 ], 00:06:43.364 "product_name": "NVMe disk", 00:06:43.364 "block_size": 4096, 00:06:43.364 "num_blocks": 38912, 00:06:43.364 "uuid": "660b17f0-1a22-442c-939d-6f32dc48115a", 00:06:43.364 "numa_id": -1, 00:06:43.364 "assigned_rate_limits": { 00:06:43.364 "rw_ios_per_sec": 0, 00:06:43.364 "rw_mbytes_per_sec": 0, 00:06:43.364 "r_mbytes_per_sec": 0, 00:06:43.364 "w_mbytes_per_sec": 0 00:06:43.364 }, 00:06:43.364 "claimed": false, 00:06:43.364 "zoned": false, 00:06:43.364 "supported_io_types": { 00:06:43.364 "read": true, 00:06:43.364 "write": true, 00:06:43.364 "unmap": true, 00:06:43.364 "flush": true, 00:06:43.364 "reset": true, 00:06:43.364 "nvme_admin": true, 00:06:43.364 "nvme_io": true, 00:06:43.364 "nvme_io_md": false, 00:06:43.364 "write_zeroes": true, 00:06:43.364 "zcopy": false, 00:06:43.364 "get_zone_info": false, 00:06:43.364 "zone_management": false, 00:06:43.364 "zone_append": false, 00:06:43.364 "compare": true, 00:06:43.364 "compare_and_write": true, 00:06:43.364 "abort": true, 00:06:43.364 "seek_hole": false, 00:06:43.364 "seek_data": false, 00:06:43.364 "copy": true, 00:06:43.364 "nvme_iov_md": false 00:06:43.364 }, 00:06:43.364 "memory_domains": [ 00:06:43.364 { 00:06:43.364 "dma_device_id": "system", 00:06:43.364 "dma_device_type": 1 00:06:43.364 } 00:06:43.364 ], 00:06:43.364 "driver_specific": { 00:06:43.364 "nvme": [ 00:06:43.364 { 00:06:43.364 "trid": { 00:06:43.364 "trtype": "TCP", 00:06:43.364 "adrfam": "IPv4", 00:06:43.364 "traddr": "10.0.0.3", 00:06:43.364 "trsvcid": "4420", 00:06:43.364 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:06:43.364 }, 00:06:43.364 "ctrlr_data": { 00:06:43.364 "cntlid": 1, 00:06:43.364 "vendor_id": "0x8086", 00:06:43.364 "model_number": "SPDK bdev Controller", 00:06:43.364 "serial_number": "SPDK0", 00:06:43.364 "firmware_revision": "25.01", 00:06:43.364 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:43.364 "oacs": { 00:06:43.364 "security": 0, 00:06:43.364 "format": 0, 00:06:43.364 "firmware": 0, 
00:06:43.364 "ns_manage": 0 00:06:43.364 }, 00:06:43.364 "multi_ctrlr": true, 00:06:43.364 "ana_reporting": false 00:06:43.364 }, 00:06:43.364 "vs": { 00:06:43.364 "nvme_version": "1.3" 00:06:43.364 }, 00:06:43.364 "ns_data": { 00:06:43.364 "id": 1, 00:06:43.364 "can_share": true 00:06:43.364 } 00:06:43.364 } 00:06:43.364 ], 00:06:43.364 "mp_policy": "active_passive" 00:06:43.364 } 00:06:43.364 } 00:06:43.364 ] 00:06:43.364 19:40:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=62283 00:06:43.364 19:40:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:06:43.364 19:40:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:06:43.621 Running I/O for 10 seconds... 00:06:44.552 Latency(us) 00:06:44.552 [2024-11-26T19:40:39.799Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:44.552 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:44.552 Nvme0n1 : 1.00 10715.00 41.86 0.00 0.00 0.00 0.00 0.00 00:06:44.552 [2024-11-26T19:40:39.799Z] =================================================================================================================== 00:06:44.552 [2024-11-26T19:40:39.799Z] Total : 10715.00 41.86 0.00 0.00 0.00 0.00 0.00 00:06:44.552 00:06:45.485 19:40:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6233137e-7bb1-45e5-b429-5e13973546ad 00:06:45.485 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:45.485 Nvme0n1 : 2.00 11111.50 43.40 0.00 0.00 0.00 0.00 0.00 00:06:45.485 [2024-11-26T19:40:40.732Z] =================================================================================================================== 00:06:45.485 [2024-11-26T19:40:40.732Z] Total : 11111.50 43.40 0.00 0.00 0.00 0.00 0.00 00:06:45.485 00:06:46.473 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:46.473 Nvme0n1 : 3.00 7830.33 30.59 0.00 0.00 0.00 0.00 0.00 00:06:46.473 [2024-11-26T19:40:41.720Z] =================================================================================================================== 00:06:46.473 [2024-11-26T19:40:41.720Z] Total : 7830.33 30.59 0.00 0.00 0.00 0.00 0.00 00:06:46.473 00:06:47.038 true 00:06:47.038 19:40:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6233137e-7bb1-45e5-b429-5e13973546ad 00:06:47.038 19:40:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:06:47.038 19:40:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:06:47.038 19:40:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:06:47.038 19:40:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 62283 00:06:47.604 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:47.604 Nvme0n1 : 4.00 7424.75 29.00 0.00 0.00 0.00 0.00 0.00 00:06:47.604 [2024-11-26T19:40:42.851Z] 
=================================================================================================================== 00:06:47.604 [2024-11-26T19:40:42.851Z] Total : 7424.75 29.00 0.00 0.00 0.00 0.00 0.00 00:06:47.604 00:06:48.536 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:48.536 Nvme0n1 : 5.00 8152.60 31.85 0.00 0.00 0.00 0.00 0.00 00:06:48.536 [2024-11-26T19:40:43.783Z] =================================================================================================================== 00:06:48.536 [2024-11-26T19:40:43.784Z] Total : 8152.60 31.85 0.00 0.00 0.00 0.00 0.00 00:06:48.537 00:06:49.468 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:49.468 Nvme0n1 : 6.00 8237.00 32.18 0.00 0.00 0.00 0.00 0.00 00:06:49.468 [2024-11-26T19:40:44.715Z] =================================================================================================================== 00:06:49.468 [2024-11-26T19:40:44.715Z] Total : 8237.00 32.18 0.00 0.00 0.00 0.00 0.00 00:06:49.468 00:06:50.839 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:50.839 Nvme0n1 : 7.00 8375.29 32.72 0.00 0.00 0.00 0.00 0.00 00:06:50.839 [2024-11-26T19:40:46.086Z] =================================================================================================================== 00:06:50.839 [2024-11-26T19:40:46.086Z] Total : 8375.29 32.72 0.00 0.00 0.00 0.00 0.00 00:06:50.839 00:06:51.772 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:51.772 Nvme0n1 : 8.00 8471.38 33.09 0.00 0.00 0.00 0.00 0.00 00:06:51.772 [2024-11-26T19:40:47.019Z] =================================================================================================================== 00:06:51.772 [2024-11-26T19:40:47.019Z] Total : 8471.38 33.09 0.00 0.00 0.00 0.00 0.00 00:06:51.772 00:06:52.704 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:52.704 Nvme0n1 : 9.00 8529.78 33.32 0.00 0.00 0.00 0.00 0.00 00:06:52.704 [2024-11-26T19:40:47.951Z] =================================================================================================================== 00:06:52.705 [2024-11-26T19:40:47.952Z] Total : 8529.78 33.32 0.00 0.00 0.00 0.00 0.00 00:06:52.705 00:06:53.690 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:53.690 Nvme0n1 : 10.00 8756.30 34.20 0.00 0.00 0.00 0.00 0.00 00:06:53.690 [2024-11-26T19:40:48.937Z] =================================================================================================================== 00:06:53.690 [2024-11-26T19:40:48.937Z] Total : 8756.30 34.20 0.00 0.00 0.00 0.00 0.00 00:06:53.690 00:06:53.690 00:06:53.690 Latency(us) 00:06:53.690 [2024-11-26T19:40:48.937Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:53.690 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:53.690 Nvme0n1 : 10.00 8764.79 34.24 0.00 0.00 14598.86 3554.07 1226027.32 00:06:53.690 [2024-11-26T19:40:48.937Z] =================================================================================================================== 00:06:53.690 [2024-11-26T19:40:48.937Z] Total : 8764.79 34.24 0.00 0.00 14598.86 3554.07 1226027.32 00:06:53.690 { 00:06:53.690 "results": [ 00:06:53.690 { 00:06:53.690 "job": "Nvme0n1", 00:06:53.690 "core_mask": "0x2", 00:06:53.690 "workload": "randwrite", 00:06:53.690 "status": "finished", 00:06:53.690 "queue_depth": 128, 00:06:53.690 "io_size": 4096, 00:06:53.690 "runtime": 
10.004916, 00:06:53.690 "iops": 8764.791228632004, 00:06:53.690 "mibps": 34.23746573684377, 00:06:53.690 "io_failed": 0, 00:06:53.690 "io_timeout": 0, 00:06:53.690 "avg_latency_us": 14598.858108726185, 00:06:53.690 "min_latency_us": 3554.067692307692, 00:06:53.690 "max_latency_us": 1226027.3230769231 00:06:53.690 } 00:06:53.690 ], 00:06:53.690 "core_count": 1 00:06:53.690 } 00:06:53.690 19:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 62259 00:06:53.690 19:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 62259 ']' 00:06:53.690 19:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 62259 00:06:53.690 19:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:06:53.690 19:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:53.690 19:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62259 00:06:53.690 19:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:53.690 19:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:53.690 19:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62259' 00:06:53.690 killing process with pid 62259 00:06:53.690 19:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 62259 00:06:53.690 Received shutdown signal, test time was about 10.000000 seconds 00:06:53.690 00:06:53.690 Latency(us) 00:06:53.690 [2024-11-26T19:40:48.937Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:53.690 [2024-11-26T19:40:48.937Z] =================================================================================================================== 00:06:53.690 [2024-11-26T19:40:48.937Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:06:53.690 19:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 62259 00:06:53.690 19:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:06:53.949 19:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:54.206 19:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6233137e-7bb1-45e5-b429-5e13973546ad 00:06:54.206 19:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:06:54.521 19:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:06:54.521 19:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:06:54.521 19:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:06:54.521 [2024-11-26 19:40:49.618198] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:06:54.521 19:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6233137e-7bb1-45e5-b429-5e13973546ad 00:06:54.521 19:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:06:54.521 19:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6233137e-7bb1-45e5-b429-5e13973546ad 00:06:54.521 19:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:54.521 19:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:54.521 19:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:54.521 19:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:54.522 19:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:54.522 19:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:54.522 19:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:54.522 19:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:54.522 19:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6233137e-7bb1-45e5-b429-5e13973546ad 00:06:54.796 request: 00:06:54.796 { 00:06:54.796 "uuid": "6233137e-7bb1-45e5-b429-5e13973546ad", 00:06:54.796 "method": "bdev_lvol_get_lvstores", 00:06:54.796 "req_id": 1 00:06:54.796 } 00:06:54.796 Got JSON-RPC error response 00:06:54.796 response: 00:06:54.796 { 00:06:54.796 "code": -19, 00:06:54.796 "message": "No such device" 00:06:54.796 } 00:06:54.796 19:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:06:54.796 19:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:54.796 19:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:54.796 19:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:54.796 19:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:55.054 aio_bdev 00:06:55.054 19:40:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
660b17f0-1a22-442c-939d-6f32dc48115a 00:06:55.054 19:40:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=660b17f0-1a22-442c-939d-6f32dc48115a 00:06:55.054 19:40:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:55.054 19:40:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:06:55.054 19:40:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:55.054 19:40:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:55.054 19:40:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:06:55.054 19:40:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 660b17f0-1a22-442c-939d-6f32dc48115a -t 2000 00:06:55.313 [ 00:06:55.313 { 00:06:55.313 "name": "660b17f0-1a22-442c-939d-6f32dc48115a", 00:06:55.313 "aliases": [ 00:06:55.313 "lvs/lvol" 00:06:55.313 ], 00:06:55.313 "product_name": "Logical Volume", 00:06:55.313 "block_size": 4096, 00:06:55.313 "num_blocks": 38912, 00:06:55.313 "uuid": "660b17f0-1a22-442c-939d-6f32dc48115a", 00:06:55.313 "assigned_rate_limits": { 00:06:55.313 "rw_ios_per_sec": 0, 00:06:55.313 "rw_mbytes_per_sec": 0, 00:06:55.313 "r_mbytes_per_sec": 0, 00:06:55.313 "w_mbytes_per_sec": 0 00:06:55.313 }, 00:06:55.313 "claimed": false, 00:06:55.313 "zoned": false, 00:06:55.313 "supported_io_types": { 00:06:55.313 "read": true, 00:06:55.313 "write": true, 00:06:55.313 "unmap": true, 00:06:55.313 "flush": false, 00:06:55.313 "reset": true, 00:06:55.313 "nvme_admin": false, 00:06:55.313 "nvme_io": false, 00:06:55.313 "nvme_io_md": false, 00:06:55.313 "write_zeroes": true, 00:06:55.313 "zcopy": false, 00:06:55.313 "get_zone_info": false, 00:06:55.313 "zone_management": false, 00:06:55.313 "zone_append": false, 00:06:55.313 "compare": false, 00:06:55.313 "compare_and_write": false, 00:06:55.313 "abort": false, 00:06:55.313 "seek_hole": true, 00:06:55.313 "seek_data": true, 00:06:55.313 "copy": false, 00:06:55.313 "nvme_iov_md": false 00:06:55.313 }, 00:06:55.313 "driver_specific": { 00:06:55.313 "lvol": { 00:06:55.313 "lvol_store_uuid": "6233137e-7bb1-45e5-b429-5e13973546ad", 00:06:55.313 "base_bdev": "aio_bdev", 00:06:55.313 "thin_provision": false, 00:06:55.314 "num_allocated_clusters": 38, 00:06:55.314 "snapshot": false, 00:06:55.314 "clone": false, 00:06:55.314 "esnap_clone": false 00:06:55.314 } 00:06:55.314 } 00:06:55.314 } 00:06:55.314 ] 00:06:55.314 19:40:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:06:55.314 19:40:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6233137e-7bb1-45e5-b429-5e13973546ad 00:06:55.314 19:40:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:06:55.572 19:40:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:06:55.572 19:40:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r 
'.[0].total_data_clusters' 00:06:55.572 19:40:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6233137e-7bb1-45e5-b429-5e13973546ad 00:06:55.829 19:40:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:06:55.829 19:40:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 660b17f0-1a22-442c-939d-6f32dc48115a 00:06:55.829 19:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6233137e-7bb1-45e5-b429-5e13973546ad 00:06:56.086 19:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:06:56.343 19:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:06:56.599 ************************************ 00:06:56.599 END TEST lvs_grow_clean 00:06:56.599 ************************************ 00:06:56.599 00:06:56.599 real 0m16.768s 00:06:56.599 user 0m16.032s 00:06:56.599 sys 0m1.954s 00:06:56.599 19:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:56.599 19:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:06:56.599 19:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:06:56.599 19:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:56.599 19:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:56.599 19:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:56.599 ************************************ 00:06:56.599 START TEST lvs_grow_dirty 00:06:56.599 ************************************ 00:06:56.599 19:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:06:56.599 19:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:06:56.599 19:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:06:56.599 19:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:06:56.599 19:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:06:56.599 19:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:06:56.599 19:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:06:56.599 19:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:06:56.599 19:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:06:56.599 19:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:56.855 19:40:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:06:56.855 19:40:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:06:57.112 19:40:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=f2897719-6bd4-4941-967e-cdb4cb74014e 00:06:57.112 19:40:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f2897719-6bd4-4941-967e-cdb4cb74014e 00:06:57.112 19:40:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:06:57.370 19:40:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:06:57.370 19:40:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:06:57.370 19:40:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u f2897719-6bd4-4941-967e-cdb4cb74014e lvol 150 00:06:57.626 19:40:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=b9ffdbfd-2a1c-49ff-bbfa-ccb6a303ce4b 00:06:57.626 19:40:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:06:57.626 19:40:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:06:57.626 [2024-11-26 19:40:52.820303] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:06:57.626 [2024-11-26 19:40:52.820360] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:06:57.626 true 00:06:57.627 19:40:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f2897719-6bd4-4941-967e-cdb4cb74014e 00:06:57.627 19:40:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:06:57.884 19:40:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:06:57.884 19:40:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:58.141 19:40:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b9ffdbfd-2a1c-49ff-bbfa-ccb6a303ce4b 00:06:58.399 19:40:53 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:06:58.399 [2024-11-26 19:40:53.596643] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:06:58.399 19:40:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:06:58.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:58.656 19:40:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=62521 00:06:58.656 19:40:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:58.656 19:40:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 62521 /var/tmp/bdevperf.sock 00:06:58.656 19:40:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 62521 ']' 00:06:58.656 19:40:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:06:58.656 19:40:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:58.656 19:40:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:58.656 19:40:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:58.656 19:40:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:58.656 19:40:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:06:58.656 [2024-11-26 19:40:53.840795] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
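In both passes the I/O load comes from the standalone bdevperf app whose startup is logged here: it is launched idle with its own RPC socket, the exported namespace is attached to it as an NVMe bdev, and the workload is then kicked off over that socket. A minimal sketch of that side, using the socket path, address and NQN from this run:

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # -z starts bdevperf idle; the test triggers the run later via perform_tests
    $bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &

    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests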
00:06:58.656 [2024-11-26 19:40:53.840978] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62521 ] 00:06:58.912 [2024-11-26 19:40:53.979991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.912 [2024-11-26 19:40:54.016402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.912 [2024-11-26 19:40:54.046849] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:59.850 19:40:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:59.850 19:40:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:06:59.850 19:40:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:06:59.850 Nvme0n1 00:06:59.850 19:40:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:00.107 [ 00:07:00.107 { 00:07:00.107 "name": "Nvme0n1", 00:07:00.107 "aliases": [ 00:07:00.107 "b9ffdbfd-2a1c-49ff-bbfa-ccb6a303ce4b" 00:07:00.107 ], 00:07:00.107 "product_name": "NVMe disk", 00:07:00.107 "block_size": 4096, 00:07:00.107 "num_blocks": 38912, 00:07:00.107 "uuid": "b9ffdbfd-2a1c-49ff-bbfa-ccb6a303ce4b", 00:07:00.107 "numa_id": -1, 00:07:00.107 "assigned_rate_limits": { 00:07:00.107 "rw_ios_per_sec": 0, 00:07:00.107 "rw_mbytes_per_sec": 0, 00:07:00.107 "r_mbytes_per_sec": 0, 00:07:00.107 "w_mbytes_per_sec": 0 00:07:00.107 }, 00:07:00.107 "claimed": false, 00:07:00.107 "zoned": false, 00:07:00.107 "supported_io_types": { 00:07:00.107 "read": true, 00:07:00.107 "write": true, 00:07:00.107 "unmap": true, 00:07:00.107 "flush": true, 00:07:00.107 "reset": true, 00:07:00.107 "nvme_admin": true, 00:07:00.107 "nvme_io": true, 00:07:00.107 "nvme_io_md": false, 00:07:00.107 "write_zeroes": true, 00:07:00.107 "zcopy": false, 00:07:00.107 "get_zone_info": false, 00:07:00.107 "zone_management": false, 00:07:00.107 "zone_append": false, 00:07:00.107 "compare": true, 00:07:00.107 "compare_and_write": true, 00:07:00.107 "abort": true, 00:07:00.107 "seek_hole": false, 00:07:00.107 "seek_data": false, 00:07:00.107 "copy": true, 00:07:00.107 "nvme_iov_md": false 00:07:00.107 }, 00:07:00.107 "memory_domains": [ 00:07:00.107 { 00:07:00.107 "dma_device_id": "system", 00:07:00.107 "dma_device_type": 1 00:07:00.107 } 00:07:00.107 ], 00:07:00.107 "driver_specific": { 00:07:00.107 "nvme": [ 00:07:00.107 { 00:07:00.107 "trid": { 00:07:00.107 "trtype": "TCP", 00:07:00.107 "adrfam": "IPv4", 00:07:00.107 "traddr": "10.0.0.3", 00:07:00.107 "trsvcid": "4420", 00:07:00.107 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:00.107 }, 00:07:00.107 "ctrlr_data": { 00:07:00.107 "cntlid": 1, 00:07:00.107 "vendor_id": "0x8086", 00:07:00.107 "model_number": "SPDK bdev Controller", 00:07:00.107 "serial_number": "SPDK0", 00:07:00.107 "firmware_revision": "25.01", 00:07:00.107 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:00.107 "oacs": { 00:07:00.107 "security": 0, 00:07:00.107 "format": 0, 00:07:00.107 "firmware": 0, 
00:07:00.107 "ns_manage": 0 00:07:00.107 }, 00:07:00.107 "multi_ctrlr": true, 00:07:00.107 "ana_reporting": false 00:07:00.107 }, 00:07:00.107 "vs": { 00:07:00.107 "nvme_version": "1.3" 00:07:00.107 }, 00:07:00.107 "ns_data": { 00:07:00.107 "id": 1, 00:07:00.107 "can_share": true 00:07:00.107 } 00:07:00.107 } 00:07:00.107 ], 00:07:00.107 "mp_policy": "active_passive" 00:07:00.107 } 00:07:00.107 } 00:07:00.107 ] 00:07:00.107 19:40:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=62539 00:07:00.107 19:40:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:00.107 19:40:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:00.107 Running I/O for 10 seconds... 00:07:01.480 Latency(us) 00:07:01.480 [2024-11-26T19:40:56.727Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:01.480 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:01.480 Nvme0n1 : 1.00 8119.00 31.71 0.00 0.00 0.00 0.00 0.00 00:07:01.480 [2024-11-26T19:40:56.727Z] =================================================================================================================== 00:07:01.480 [2024-11-26T19:40:56.727Z] Total : 8119.00 31.71 0.00 0.00 0.00 0.00 0.00 00:07:01.480 00:07:02.123 19:40:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f2897719-6bd4-4941-967e-cdb4cb74014e 00:07:02.123 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:02.123 Nvme0n1 : 2.00 9570.00 37.38 0.00 0.00 0.00 0.00 0.00 00:07:02.123 [2024-11-26T19:40:57.370Z] =================================================================================================================== 00:07:02.123 [2024-11-26T19:40:57.370Z] Total : 9570.00 37.38 0.00 0.00 0.00 0.00 0.00 00:07:02.123 00:07:02.382 true 00:07:02.382 19:40:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f2897719-6bd4-4941-967e-cdb4cb74014e 00:07:02.382 19:40:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:02.639 19:40:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:02.639 19:40:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:02.639 19:40:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 62539 00:07:03.206 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:03.206 Nvme0n1 : 3.00 10168.33 39.72 0.00 0.00 0.00 0.00 0.00 00:07:03.206 [2024-11-26T19:40:58.453Z] =================================================================================================================== 00:07:03.206 [2024-11-26T19:40:58.453Z] Total : 10168.33 39.72 0.00 0.00 0.00 0.00 0.00 00:07:03.206 00:07:04.136 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:04.136 Nvme0n1 : 4.00 10398.25 40.62 0.00 0.00 0.00 0.00 0.00 00:07:04.136 [2024-11-26T19:40:59.383Z] 
=================================================================================================================== 00:07:04.136 [2024-11-26T19:40:59.383Z] Total : 10398.25 40.62 0.00 0.00 0.00 0.00 0.00 00:07:04.136 00:07:05.070 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:05.070 Nvme0n1 : 5.00 10502.80 41.03 0.00 0.00 0.00 0.00 0.00 00:07:05.070 [2024-11-26T19:41:00.317Z] =================================================================================================================== 00:07:05.070 [2024-11-26T19:41:00.317Z] Total : 10502.80 41.03 0.00 0.00 0.00 0.00 0.00 00:07:05.070 00:07:06.442 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:06.442 Nvme0n1 : 6.00 10544.17 41.19 0.00 0.00 0.00 0.00 0.00 00:07:06.442 [2024-11-26T19:41:01.689Z] =================================================================================================================== 00:07:06.442 [2024-11-26T19:41:01.689Z] Total : 10544.17 41.19 0.00 0.00 0.00 0.00 0.00 00:07:06.442 00:07:07.405 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:07.405 Nvme0n1 : 7.00 10598.14 41.40 0.00 0.00 0.00 0.00 0.00 00:07:07.405 [2024-11-26T19:41:02.652Z] =================================================================================================================== 00:07:07.405 [2024-11-26T19:41:02.652Z] Total : 10598.14 41.40 0.00 0.00 0.00 0.00 0.00 00:07:07.405 00:07:08.378 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:08.378 Nvme0n1 : 8.00 10654.50 41.62 0.00 0.00 0.00 0.00 0.00 00:07:08.378 [2024-11-26T19:41:03.625Z] =================================================================================================================== 00:07:08.378 [2024-11-26T19:41:03.625Z] Total : 10654.50 41.62 0.00 0.00 0.00 0.00 0.00 00:07:08.378 00:07:09.307 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:09.307 Nvme0n1 : 9.00 10041.11 39.22 0.00 0.00 0.00 0.00 0.00 00:07:09.307 [2024-11-26T19:41:04.554Z] =================================================================================================================== 00:07:09.307 [2024-11-26T19:41:04.554Z] Total : 10041.11 39.22 0.00 0.00 0.00 0.00 0.00 00:07:09.307 00:07:10.240 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:10.240 Nvme0n1 : 10.00 9426.40 36.82 0.00 0.00 0.00 0.00 0.00 00:07:10.240 [2024-11-26T19:41:05.487Z] =================================================================================================================== 00:07:10.240 [2024-11-26T19:41:05.487Z] Total : 9426.40 36.82 0.00 0.00 0.00 0.00 0.00 00:07:10.240 00:07:10.240 00:07:10.240 Latency(us) 00:07:10.240 [2024-11-26T19:41:05.487Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:10.240 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:10.240 Nvme0n1 : 10.01 9427.55 36.83 0.00 0.00 13572.23 4537.11 929199.66 00:07:10.240 [2024-11-26T19:41:05.487Z] =================================================================================================================== 00:07:10.240 [2024-11-26T19:41:05.487Z] Total : 9427.55 36.83 0.00 0.00 13572.23 4537.11 929199.66 00:07:10.240 { 00:07:10.240 "results": [ 00:07:10.240 { 00:07:10.240 "job": "Nvme0n1", 00:07:10.240 "core_mask": "0x2", 00:07:10.240 "workload": "randwrite", 00:07:10.240 "status": "finished", 00:07:10.240 "queue_depth": 128, 00:07:10.240 "io_size": 4096, 00:07:10.240 
"runtime": 10.012356, 00:07:10.240 "iops": 9427.551317591984, 00:07:10.240 "mibps": 36.82637233434369, 00:07:10.240 "io_failed": 0, 00:07:10.240 "io_timeout": 0, 00:07:10.240 "avg_latency_us": 13572.225392764705, 00:07:10.240 "min_latency_us": 4537.107692307693, 00:07:10.240 "max_latency_us": 929199.6553846154 00:07:10.240 } 00:07:10.240 ], 00:07:10.240 "core_count": 1 00:07:10.240 } 00:07:10.240 19:41:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 62521 00:07:10.240 19:41:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 62521 ']' 00:07:10.240 19:41:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 62521 00:07:10.240 19:41:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:07:10.240 19:41:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:10.240 19:41:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62521 00:07:10.240 19:41:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:10.240 19:41:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:10.240 killing process with pid 62521 00:07:10.240 19:41:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62521' 00:07:10.240 19:41:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 62521 00:07:10.240 Received shutdown signal, test time was about 10.000000 seconds 00:07:10.240 00:07:10.240 Latency(us) 00:07:10.240 [2024-11-26T19:41:05.487Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:10.240 [2024-11-26T19:41:05.487Z] =================================================================================================================== 00:07:10.240 [2024-11-26T19:41:05.487Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:10.240 19:41:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 62521 00:07:10.240 19:41:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:10.497 19:41:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:10.754 19:41:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f2897719-6bd4-4941-967e-cdb4cb74014e 00:07:10.754 19:41:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:11.011 19:41:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:11.011 19:41:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:11.011 19:41:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 62180 
00:07:11.011 19:41:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 62180 00:07:11.011 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 62180 Killed "${NVMF_APP[@]}" "$@" 00:07:11.011 19:41:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:11.011 19:41:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:11.011 19:41:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:11.011 19:41:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:11.011 19:41:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:11.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.011 19:41:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=62674 00:07:11.011 19:41:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 62674 00:07:11.011 19:41:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 62674 ']' 00:07:11.011 19:41:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.011 19:41:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:11.011 19:41:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:11.011 19:41:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.011 19:41:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:11.011 19:41:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:11.011 [2024-11-26 19:41:06.119804] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:07:11.011 [2024-11-26 19:41:06.119859] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:11.011 [2024-11-26 19:41:06.254387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.268 [2024-11-26 19:41:06.286050] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:11.268 [2024-11-26 19:41:06.286217] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:11.268 [2024-11-26 19:41:06.286230] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:11.268 [2024-11-26 19:41:06.286235] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:11.268 [2024-11-26 19:41:06.286240] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
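What makes this second pass "dirty" is that the nvmf target holding the lvstore was killed with SIGKILL above (pid 62180) rather than shut down, so the lvstore was never cleanly closed. The target has just been restarted; below, re-creating the AIO bdev on the same file triggers blobstore recovery, and the test then expects the grown geometry (99 data clusters, 61 free) to have survived. Roughly, with shorthand variables for the values shown in this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    aio_file=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
    lvs=f2897719-6bd4-4941-967e-cdb4cb74014e     # lvstore UUID from this run
    lvol=b9ffdbfd-2a1c-49ff-bbfa-ccb6a303ce4b    # lvol UUID from this run

    kill -9 "$nvmfpid"    # 62180 here; leaves the lvstore dirty on purpose
    # ...restart the target (the log shows nvmf_tgt -i 0 -e 0xFFFF -m 0x1 in the test netns)...

    $rpc bdev_aio_create "$aio_file" aio_bdev 4096   # logs "Performing recovery on blobstore"
    $rpc bdev_wait_for_examine
    $rpc bdev_get_bdevs -b "$lvol" -t 2000           # lvol reappears with its 38 allocated clusters
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'         # expected: 61
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # expected: 99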
00:07:11.268 [2024-11-26 19:41:06.286466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.268 [2024-11-26 19:41:06.317024] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:11.832 19:41:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:11.832 19:41:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:11.832 19:41:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:11.832 19:41:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:11.832 19:41:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:11.832 19:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:11.832 19:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:12.089 [2024-11-26 19:41:07.204684] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:12.089 [2024-11-26 19:41:07.205036] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:12.089 [2024-11-26 19:41:07.205230] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:12.089 19:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:12.089 19:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev b9ffdbfd-2a1c-49ff-bbfa-ccb6a303ce4b 00:07:12.089 19:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=b9ffdbfd-2a1c-49ff-bbfa-ccb6a303ce4b 00:07:12.089 19:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:12.089 19:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:12.089 19:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:12.089 19:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:12.089 19:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:12.350 19:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b9ffdbfd-2a1c-49ff-bbfa-ccb6a303ce4b -t 2000 00:07:12.623 [ 00:07:12.623 { 00:07:12.623 "name": "b9ffdbfd-2a1c-49ff-bbfa-ccb6a303ce4b", 00:07:12.623 "aliases": [ 00:07:12.623 "lvs/lvol" 00:07:12.623 ], 00:07:12.623 "product_name": "Logical Volume", 00:07:12.623 "block_size": 4096, 00:07:12.623 "num_blocks": 38912, 00:07:12.623 "uuid": "b9ffdbfd-2a1c-49ff-bbfa-ccb6a303ce4b", 00:07:12.623 "assigned_rate_limits": { 00:07:12.623 "rw_ios_per_sec": 0, 00:07:12.623 "rw_mbytes_per_sec": 0, 00:07:12.623 "r_mbytes_per_sec": 0, 00:07:12.623 "w_mbytes_per_sec": 0 00:07:12.623 }, 00:07:12.623 
"claimed": false, 00:07:12.623 "zoned": false, 00:07:12.623 "supported_io_types": { 00:07:12.623 "read": true, 00:07:12.623 "write": true, 00:07:12.623 "unmap": true, 00:07:12.623 "flush": false, 00:07:12.623 "reset": true, 00:07:12.623 "nvme_admin": false, 00:07:12.623 "nvme_io": false, 00:07:12.623 "nvme_io_md": false, 00:07:12.623 "write_zeroes": true, 00:07:12.623 "zcopy": false, 00:07:12.623 "get_zone_info": false, 00:07:12.623 "zone_management": false, 00:07:12.623 "zone_append": false, 00:07:12.623 "compare": false, 00:07:12.623 "compare_and_write": false, 00:07:12.623 "abort": false, 00:07:12.623 "seek_hole": true, 00:07:12.623 "seek_data": true, 00:07:12.623 "copy": false, 00:07:12.623 "nvme_iov_md": false 00:07:12.623 }, 00:07:12.623 "driver_specific": { 00:07:12.623 "lvol": { 00:07:12.623 "lvol_store_uuid": "f2897719-6bd4-4941-967e-cdb4cb74014e", 00:07:12.623 "base_bdev": "aio_bdev", 00:07:12.623 "thin_provision": false, 00:07:12.623 "num_allocated_clusters": 38, 00:07:12.623 "snapshot": false, 00:07:12.623 "clone": false, 00:07:12.623 "esnap_clone": false 00:07:12.623 } 00:07:12.623 } 00:07:12.623 } 00:07:12.623 ] 00:07:12.623 19:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:12.623 19:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f2897719-6bd4-4941-967e-cdb4cb74014e 00:07:12.623 19:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:12.623 19:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:12.623 19:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:12.623 19:41:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f2897719-6bd4-4941-967e-cdb4cb74014e 00:07:12.909 19:41:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:12.909 19:41:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:13.166 [2024-11-26 19:41:08.198723] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:13.166 19:41:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f2897719-6bd4-4941-967e-cdb4cb74014e 00:07:13.166 19:41:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:07:13.166 19:41:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f2897719-6bd4-4941-967e-cdb4cb74014e 00:07:13.166 19:41:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:13.166 19:41:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:13.166 19:41:08 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:13.166 19:41:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:13.166 19:41:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:13.166 19:41:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:13.166 19:41:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:13.166 19:41:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:13.166 19:41:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f2897719-6bd4-4941-967e-cdb4cb74014e 00:07:13.423 request: 00:07:13.423 { 00:07:13.423 "uuid": "f2897719-6bd4-4941-967e-cdb4cb74014e", 00:07:13.423 "method": "bdev_lvol_get_lvstores", 00:07:13.423 "req_id": 1 00:07:13.423 } 00:07:13.423 Got JSON-RPC error response 00:07:13.423 response: 00:07:13.423 { 00:07:13.423 "code": -19, 00:07:13.423 "message": "No such device" 00:07:13.423 } 00:07:13.423 19:41:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:07:13.423 19:41:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:13.423 19:41:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:13.423 19:41:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:13.423 19:41:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:13.680 aio_bdev 00:07:13.680 19:41:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev b9ffdbfd-2a1c-49ff-bbfa-ccb6a303ce4b 00:07:13.680 19:41:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=b9ffdbfd-2a1c-49ff-bbfa-ccb6a303ce4b 00:07:13.680 19:41:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:13.680 19:41:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:13.680 19:41:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:13.680 19:41:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:13.680 19:41:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:13.680 19:41:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b9ffdbfd-2a1c-49ff-bbfa-ccb6a303ce4b -t 2000 00:07:13.937 [ 00:07:13.937 { 
00:07:13.937 "name": "b9ffdbfd-2a1c-49ff-bbfa-ccb6a303ce4b", 00:07:13.937 "aliases": [ 00:07:13.937 "lvs/lvol" 00:07:13.937 ], 00:07:13.937 "product_name": "Logical Volume", 00:07:13.937 "block_size": 4096, 00:07:13.937 "num_blocks": 38912, 00:07:13.937 "uuid": "b9ffdbfd-2a1c-49ff-bbfa-ccb6a303ce4b", 00:07:13.937 "assigned_rate_limits": { 00:07:13.937 "rw_ios_per_sec": 0, 00:07:13.937 "rw_mbytes_per_sec": 0, 00:07:13.937 "r_mbytes_per_sec": 0, 00:07:13.937 "w_mbytes_per_sec": 0 00:07:13.937 }, 00:07:13.937 "claimed": false, 00:07:13.937 "zoned": false, 00:07:13.937 "supported_io_types": { 00:07:13.937 "read": true, 00:07:13.937 "write": true, 00:07:13.937 "unmap": true, 00:07:13.937 "flush": false, 00:07:13.937 "reset": true, 00:07:13.937 "nvme_admin": false, 00:07:13.937 "nvme_io": false, 00:07:13.937 "nvme_io_md": false, 00:07:13.937 "write_zeroes": true, 00:07:13.937 "zcopy": false, 00:07:13.937 "get_zone_info": false, 00:07:13.937 "zone_management": false, 00:07:13.937 "zone_append": false, 00:07:13.937 "compare": false, 00:07:13.937 "compare_and_write": false, 00:07:13.937 "abort": false, 00:07:13.937 "seek_hole": true, 00:07:13.937 "seek_data": true, 00:07:13.937 "copy": false, 00:07:13.937 "nvme_iov_md": false 00:07:13.937 }, 00:07:13.937 "driver_specific": { 00:07:13.937 "lvol": { 00:07:13.937 "lvol_store_uuid": "f2897719-6bd4-4941-967e-cdb4cb74014e", 00:07:13.938 "base_bdev": "aio_bdev", 00:07:13.938 "thin_provision": false, 00:07:13.938 "num_allocated_clusters": 38, 00:07:13.938 "snapshot": false, 00:07:13.938 "clone": false, 00:07:13.938 "esnap_clone": false 00:07:13.938 } 00:07:13.938 } 00:07:13.938 } 00:07:13.938 ] 00:07:13.938 19:41:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:13.938 19:41:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f2897719-6bd4-4941-967e-cdb4cb74014e 00:07:13.938 19:41:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:14.203 19:41:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:14.203 19:41:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f2897719-6bd4-4941-967e-cdb4cb74014e 00:07:14.203 19:41:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:14.474 19:41:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:14.474 19:41:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete b9ffdbfd-2a1c-49ff-bbfa-ccb6a303ce4b 00:07:14.732 19:41:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f2897719-6bd4-4941-967e-cdb4cb74014e 00:07:14.990 19:41:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:14.990 19:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:15.556 00:07:15.556 real 0m18.716s 00:07:15.556 user 0m40.580s 00:07:15.556 sys 0m5.624s 00:07:15.556 19:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:15.556 19:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:15.556 ************************************ 00:07:15.556 END TEST lvs_grow_dirty 00:07:15.556 ************************************ 00:07:15.556 19:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:15.556 19:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:07:15.556 19:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:07:15.556 19:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:07:15.556 19:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:15.556 19:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:07:15.556 19:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:07:15.556 19:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:07:15.556 19:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:15.556 nvmf_trace.0 00:07:15.556 19:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:07:15.556 19:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:15.556 19:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:15.556 19:41:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:16.498 19:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:16.498 19:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:16.498 19:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:16.498 19:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:16.498 rmmod nvme_tcp 00:07:16.498 rmmod nvme_fabrics 00:07:16.498 rmmod nvme_keyring 00:07:16.498 19:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:16.498 19:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:16.498 19:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:16.498 19:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 62674 ']' 00:07:16.498 19:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 62674 00:07:16.498 19:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 62674 ']' 00:07:16.498 19:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 62674 00:07:16.498 19:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:07:16.498 19:41:11 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:16.498 19:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62674 00:07:16.498 19:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:16.498 19:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:16.498 19:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62674' 00:07:16.498 killing process with pid 62674 00:07:16.498 19:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 62674 00:07:16.498 19:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 62674 00:07:16.757 19:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:16.757 19:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:16.757 19:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:16.757 19:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:07:16.757 19:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:07:16.757 19:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:16.757 19:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:07:16.757 19:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:16.757 19:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:16.757 19:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:16.757 19:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:16.757 19:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:16.757 19:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:16.757 19:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:16.757 19:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:16.757 19:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:16.757 19:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:16.757 19:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:16.757 19:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:16.757 19:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:16.757 19:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:16.757 19:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:16.757 19:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:07:16.757 19:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:16.757 19:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:16.757 19:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:16.757 19:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:07:16.757 00:07:16.757 real 0m38.632s 00:07:16.757 user 1m2.796s 00:07:16.757 sys 0m8.933s 00:07:16.757 19:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:16.757 19:41:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:16.757 ************************************ 00:07:16.757 END TEST nvmf_lvs_grow 00:07:16.757 ************************************ 00:07:16.757 19:41:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:16.757 19:41:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:16.757 19:41:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:16.757 19:41:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:16.757 ************************************ 00:07:16.757 START TEST nvmf_bdev_io_wait 00:07:16.757 ************************************ 00:07:16.757 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:17.017 * Looking for test storage... 
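The nvmftestfini records above undo the virtual test topology in roughly the reverse order it was built: unload the initiator-side NVMe modules, stop the nvmf_tgt reactor, strip only the SPDK-tagged iptables rules, detach the bridge ports, and remove the veth pairs from both namespaces. A condensed standalone sketch, using the same interface and namespace names as this run (the final namespace deletion is an assumption about what remove_spdk_ns does behind xtrace_disable):

modprobe -v -r nvme-tcp                                  # verbose removal; the rmmod nvme_fabrics/nvme_keyring lines above come from the same call
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"                       # nvmf_tgt was pid 62674 in this run
iptables-save | grep -v SPDK_NVMF | iptables-restore     # drop only rules tagged with the SPDK_NVMF comment
for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
  ip link set "$port" nomaster                           # detach from nvmf_br
  ip link set "$port" down
done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk                         # assumed final step of remove_spdk_ns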
00:07:17.017 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:17.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.017 --rc genhtml_branch_coverage=1 00:07:17.017 --rc genhtml_function_coverage=1 00:07:17.017 --rc genhtml_legend=1 00:07:17.017 --rc geninfo_all_blocks=1 00:07:17.017 --rc geninfo_unexecuted_blocks=1 00:07:17.017 00:07:17.017 ' 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:17.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.017 --rc genhtml_branch_coverage=1 00:07:17.017 --rc genhtml_function_coverage=1 00:07:17.017 --rc genhtml_legend=1 00:07:17.017 --rc geninfo_all_blocks=1 00:07:17.017 --rc geninfo_unexecuted_blocks=1 00:07:17.017 00:07:17.017 ' 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:17.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.017 --rc genhtml_branch_coverage=1 00:07:17.017 --rc genhtml_function_coverage=1 00:07:17.017 --rc genhtml_legend=1 00:07:17.017 --rc geninfo_all_blocks=1 00:07:17.017 --rc geninfo_unexecuted_blocks=1 00:07:17.017 00:07:17.017 ' 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:17.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.017 --rc genhtml_branch_coverage=1 00:07:17.017 --rc genhtml_function_coverage=1 00:07:17.017 --rc genhtml_legend=1 00:07:17.017 --rc geninfo_all_blocks=1 00:07:17.017 --rc geninfo_unexecuted_blocks=1 00:07:17.017 00:07:17.017 ' 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=91838eb1-5852-43eb-90b2-09876f360ab2 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:17.017 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:17.017 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:17.018 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
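Sourcing nvmf/common.sh above pins the suite's constants: port 4420, a freshly generated host NQN/ID pair (NVME_HOSTNQN / NVME_HOSTID), the NVME_HOST argument array, NET_TYPE=virt, and the 64 MiB / 512-byte malloc bdev geometry. This suite drives I/O through bdevperf rather than the kernel initiator, but those variables are exactly what an nvme-cli connect would consume; a hypothetical connect against the subsystem this test creates a few records later (nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.3:4420) would look roughly like:

nvme connect -t tcp -a 10.0.0.3 -s 4420 \
  -n nqn.2016-06.io.spdk:cnode1 \
  --hostnqn=nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 \
  --hostid=91838eb1-5852-43eb-90b2-09876f360ab2          # the "${NVME_HOST[@]}" pair generated above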
00:07:17.018 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:17.018 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:17.018 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:17.018 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:17.018 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:17.018 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:17.018 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:17.018 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:17.018 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:17.018 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:17.018 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:17.018 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:17.018 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:17.018 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:17.018 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:17.018 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:17.018 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:17.018 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:17.018 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:17.018 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:17.018 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:17.018 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:17.018 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:17.018 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:17.018 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:17.018 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:17.018 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:17.018 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:17.018 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:17.018 
19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:17.018 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:17.018 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:17.018 Cannot find device "nvmf_init_br" 00:07:17.018 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:07:17.018 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:17.018 Cannot find device "nvmf_init_br2" 00:07:17.018 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:07:17.018 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:17.018 Cannot find device "nvmf_tgt_br" 00:07:17.018 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:07:17.018 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:17.018 Cannot find device "nvmf_tgt_br2" 00:07:17.018 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:07:17.018 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:17.018 Cannot find device "nvmf_init_br" 00:07:17.018 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:07:17.018 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:17.018 Cannot find device "nvmf_init_br2" 00:07:17.018 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:07:17.018 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:17.018 Cannot find device "nvmf_tgt_br" 00:07:17.018 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:07:17.018 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:17.018 Cannot find device "nvmf_tgt_br2" 00:07:17.018 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:07:17.018 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:17.018 Cannot find device "nvmf_br" 00:07:17.018 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:07:17.018 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:17.018 Cannot find device "nvmf_init_if" 00:07:17.018 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:07:17.018 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:17.276 Cannot find device "nvmf_init_if2" 00:07:17.276 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:07:17.276 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:17.276 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:17.276 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:07:17.276 
19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:17.276 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:17.276 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:07:17.276 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:17.276 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:17.276 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:17.276 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:17.276 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:17.276 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:17.276 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:17.276 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:17.276 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:17.276 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:17.276 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:17.276 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:17.276 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:17.276 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:17.276 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:17.276 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:17.276 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:17.276 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:17.276 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:17.276 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:17.276 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:17.276 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:17.276 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:17.276 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:17.276 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:17.276 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:17.276 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:17.276 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:17.276 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:17.276 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:17.276 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:17.276 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:17.276 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:17.276 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:17.276 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:07:17.276 00:07:17.276 --- 10.0.0.3 ping statistics --- 00:07:17.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:17.276 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:07:17.276 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:17.276 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:17.276 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:07:17.276 00:07:17.276 --- 10.0.0.4 ping statistics --- 00:07:17.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:17.276 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:07:17.276 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:17.276 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:17.276 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.013 ms 00:07:17.276 00:07:17.276 --- 10.0.0.1 ping statistics --- 00:07:17.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:17.277 rtt min/avg/max/mdev = 0.013/0.013/0.013/0.000 ms 00:07:17.277 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:17.277 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:17.277 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.043 ms 00:07:17.277 00:07:17.277 --- 10.0.0.2 ping statistics --- 00:07:17.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:17.277 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:07:17.277 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:17.277 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:07:17.277 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:17.277 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:17.277 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:17.277 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:17.277 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:17.277 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:17.277 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:17.277 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:17.277 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:17.277 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:17.277 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:17.277 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=63053 00:07:17.277 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 63053 00:07:17.277 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 63053 ']' 00:07:17.277 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.277 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:17.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.277 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.277 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:17.277 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:17.277 19:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:17.277 [2024-11-26 19:41:12.494001] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
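nvmf_veth_init above builds the whole fabric out of veth pairs: the *_br ends stay in the root namespace and are enslaved to the nvmf_br bridge, nvmf_init_if/if2 (10.0.0.1/.2) act as initiator-side endpoints, and nvmf_tgt_if/if2 (10.0.0.3/.4) are moved into nvmf_tgt_ns_spdk, where nvmf_tgt is then started with ip netns exec. The pings confirm reachability in both directions before the target comes up, and SPDK-tagged iptables ACCEPT rules open port 4420 on the initiator interfaces. A condensed sketch with the same names and addresses:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target side
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up; ip link set nvmf_init_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
  ip link set "$port" up
  ip link set "$port" master nvmf_br                                      # all four veth peers meet on one bridge
done
ping -c 1 10.0.0.3          # root namespace must reach the target address before nvmf_tgt is started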
00:07:17.277 [2024-11-26 19:41:12.494059] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:17.535 [2024-11-26 19:41:12.636655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:17.535 [2024-11-26 19:41:12.674118] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:17.535 [2024-11-26 19:41:12.674164] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:17.535 [2024-11-26 19:41:12.674170] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:17.535 [2024-11-26 19:41:12.674176] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:17.535 [2024-11-26 19:41:12.674180] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:17.535 [2024-11-26 19:41:12.674916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:17.535 [2024-11-26 19:41:12.674992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:17.535 [2024-11-26 19:41:12.675056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.535 [2024-11-26 19:41:12.675057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:18.467 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:18.467 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:07:18.467 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:18.467 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:18.467 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:18.467 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:18.467 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:18.467 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.467 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:18.467 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.467 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:18.467 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.467 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:18.467 [2024-11-26 19:41:13.434592] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:18.467 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.467 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:18.467 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.467 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:18.467 [2024-11-26 19:41:13.449197] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:18.467 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.467 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:18.467 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.467 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:18.467 Malloc0 00:07:18.467 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.467 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:18.467 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.467 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:18.467 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.467 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:18.467 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.467 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:18.467 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.467 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:07:18.467 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.467 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:18.467 [2024-11-26 19:41:13.495986] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:18.467 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.467 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=63088 00:07:18.467 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:07:18.467 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=63090 00:07:18.467 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:18.467 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:18.467 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:18.467 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:18.467 19:41:13 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:18.467 { 00:07:18.468 "params": { 00:07:18.468 "name": "Nvme$subsystem", 00:07:18.468 "trtype": "$TEST_TRANSPORT", 00:07:18.468 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:18.468 "adrfam": "ipv4", 00:07:18.468 "trsvcid": "$NVMF_PORT", 00:07:18.468 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:18.468 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:18.468 "hdgst": ${hdgst:-false}, 00:07:18.468 "ddgst": ${ddgst:-false} 00:07:18.468 }, 00:07:18.468 "method": "bdev_nvme_attach_controller" 00:07:18.468 } 00:07:18.468 EOF 00:07:18.468 )") 00:07:18.468 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=63091 00:07:18.468 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:18.468 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:18.468 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=63095 00:07:18.468 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:18.468 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:18.468 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:18.468 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:18.468 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:18.468 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:18.468 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:18.468 { 00:07:18.468 "params": { 00:07:18.468 "name": "Nvme$subsystem", 00:07:18.468 "trtype": "$TEST_TRANSPORT", 00:07:18.468 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:18.468 "adrfam": "ipv4", 00:07:18.468 "trsvcid": "$NVMF_PORT", 00:07:18.468 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:18.468 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:18.468 "hdgst": ${hdgst:-false}, 00:07:18.468 "ddgst": ${ddgst:-false} 00:07:18.468 }, 00:07:18.468 "method": "bdev_nvme_attach_controller" 00:07:18.468 } 00:07:18.468 EOF 00:07:18.468 )") 00:07:18.468 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:18.468 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:18.468 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
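With the target up inside the namespace, the rpc_cmd calls above configure it over /var/tmp/spdk.sock: a deliberately tiny bdev I/O pool and cache (so that queued-I/O waiting, the subject of this suite, actually gets exercised), the TCP transport, a 64 MiB / 512-byte-block malloc bdev, and a subsystem exporting it on 10.0.0.3:4420. Replayed by hand with rpc.py, the same sequence is roughly:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_set_options -p 5 -c 1            # 5-entry bdev_io pool, 1-entry cache, set before framework init
$rpc framework_start_init                  # the target was launched with --wait-for-rpc
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0  # MALLOC_BDEV_SIZE x MALLOC_BLOCK_SIZE from earlier
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420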
00:07:18.468 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:18.468 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:18.468 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:18.468 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:18.468 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:18.468 { 00:07:18.468 "params": { 00:07:18.468 "name": "Nvme$subsystem", 00:07:18.468 "trtype": "$TEST_TRANSPORT", 00:07:18.468 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:18.468 "adrfam": "ipv4", 00:07:18.468 "trsvcid": "$NVMF_PORT", 00:07:18.468 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:18.468 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:18.468 "hdgst": ${hdgst:-false}, 00:07:18.468 "ddgst": ${ddgst:-false} 00:07:18.468 }, 00:07:18.468 "method": "bdev_nvme_attach_controller" 00:07:18.468 } 00:07:18.468 EOF 00:07:18.468 )") 00:07:18.468 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:18.468 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:18.468 "params": { 00:07:18.468 "name": "Nvme1", 00:07:18.468 "trtype": "tcp", 00:07:18.468 "traddr": "10.0.0.3", 00:07:18.468 "adrfam": "ipv4", 00:07:18.468 "trsvcid": "4420", 00:07:18.468 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:18.468 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:18.468 "hdgst": false, 00:07:18.468 "ddgst": false 00:07:18.468 }, 00:07:18.468 "method": "bdev_nvme_attach_controller" 00:07:18.468 }' 00:07:18.468 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:18.468 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:18.468 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:18.468 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:18.468 "params": { 00:07:18.468 "name": "Nvme1", 00:07:18.468 "trtype": "tcp", 00:07:18.468 "traddr": "10.0.0.3", 00:07:18.468 "adrfam": "ipv4", 00:07:18.468 "trsvcid": "4420", 00:07:18.468 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:18.468 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:18.468 "hdgst": false, 00:07:18.468 "ddgst": false 00:07:18.468 }, 00:07:18.468 "method": "bdev_nvme_attach_controller" 00:07:18.468 }' 00:07:18.468 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
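Each bdevperf instance receives the same generated attach-controller JSON on a private file descriptor (the --json /dev/fd/63 process substitution above) and differs only in its core mask, shared-memory instance id, and workload. Launched by hand against a copy of that JSON saved to a file (the path below is hypothetical), the four instances amount to:

bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
cfg=/tmp/nvme1.json        # hypothetical file holding the JSON printed above
"$bdevperf" -m 0x10 -i 1 --json "$cfg" -q 128 -o 4096 -w write -t 1 -s 256 &  WRITE_PID=$!
"$bdevperf" -m 0x20 -i 2 --json "$cfg" -q 128 -o 4096 -w read  -t 1 -s 256 &  READ_PID=$!
"$bdevperf" -m 0x40 -i 3 --json "$cfg" -q 128 -o 4096 -w flush -t 1 -s 256 &  FLUSH_PID=$!
"$bdevperf" -m 0x80 -i 4 --json "$cfg" -q 128 -o 4096 -w unmap -t 1 -s 256 &  UNMAP_PID=$!
wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"

In the run itself the instances are backgrounded just like this, and the later "wait 63088" / "wait 63090" / "wait 63091" / "wait 63095" records correspond to those four PIDs.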
00:07:18.468 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:18.468 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:18.468 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:18.468 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:18.468 "params": { 00:07:18.468 "name": "Nvme1", 00:07:18.468 "trtype": "tcp", 00:07:18.468 "traddr": "10.0.0.3", 00:07:18.468 "adrfam": "ipv4", 00:07:18.468 "trsvcid": "4420", 00:07:18.468 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:18.468 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:18.468 "hdgst": false, 00:07:18.468 "ddgst": false 00:07:18.468 }, 00:07:18.468 "method": "bdev_nvme_attach_controller" 00:07:18.468 }' 00:07:18.468 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:18.468 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:18.468 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:18.468 { 00:07:18.468 "params": { 00:07:18.468 "name": "Nvme$subsystem", 00:07:18.468 "trtype": "$TEST_TRANSPORT", 00:07:18.468 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:18.468 "adrfam": "ipv4", 00:07:18.468 "trsvcid": "$NVMF_PORT", 00:07:18.468 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:18.468 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:18.468 "hdgst": ${hdgst:-false}, 00:07:18.468 "ddgst": ${ddgst:-false} 00:07:18.468 }, 00:07:18.468 "method": "bdev_nvme_attach_controller" 00:07:18.468 } 00:07:18.468 EOF 00:07:18.468 )") 00:07:18.468 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:18.468 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:18.468 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:18.468 19:41:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:18.468 "params": { 00:07:18.468 "name": "Nvme1", 00:07:18.468 "trtype": "tcp", 00:07:18.468 "traddr": "10.0.0.3", 00:07:18.468 "adrfam": "ipv4", 00:07:18.468 "trsvcid": "4420", 00:07:18.468 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:18.468 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:18.468 "hdgst": false, 00:07:18.468 "ddgst": false 00:07:18.468 }, 00:07:18.468 "method": "bdev_nvme_attach_controller" 00:07:18.468 }' 00:07:18.468 [2024-11-26 19:41:13.540830] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:07:18.468 [2024-11-26 19:41:13.540983] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:07:18.468 [2024-11-26 19:41:13.550002] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:07:18.468 [2024-11-26 19:41:13.550051] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:07:18.468 [2024-11-26 19:41:13.552269] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:07:18.468 [2024-11-26 19:41:13.552317] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:07:18.468 [2024-11-26 19:41:13.559956] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:07:18.468 [2024-11-26 19:41:13.560012] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:07:18.726 [2024-11-26 19:41:13.714003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.726 [2024-11-26 19:41:13.742113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:07:18.726 [2024-11-26 19:41:13.754580] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:18.726 [2024-11-26 19:41:13.755394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.726 [2024-11-26 19:41:13.784255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:18.726 [2024-11-26 19:41:13.796761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.726 [2024-11-26 19:41:13.796852] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:18.726 [2024-11-26 19:41:13.826019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:07:18.726 [2024-11-26 19:41:13.838501] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:18.726 [2024-11-26 19:41:13.844733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.726 Running I/O for 1 seconds... 00:07:18.726 [2024-11-26 19:41:13.872907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:07:18.726 [2024-11-26 19:41:13.885424] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:18.726 Running I/O for 1 seconds... 00:07:18.726 Running I/O for 1 seconds... 00:07:18.983 Running I/O for 1 seconds... 
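Once these one-second runs finish, bdevperf prints the per-job Latency tables that follow; the MiB/s column is simply IOPS multiplied by the 4096-byte I/O size shown in each job header. A quick arithmetic check against one of the rows below (the bc invocation is illustrative):

```bash
# Sanity-check one row of the tables that follow: 12942 IOPS at 4 KiB per I/O.
echo "scale=2; 12942 * 4096 / 1024 / 1024" | bc   # ~50.55 MiB/s, matching the write job
```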
00:07:19.916 7902.00 IOPS, 30.87 MiB/s 00:07:19.916 Latency(us) 00:07:19.916 [2024-11-26T19:41:15.163Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:19.917 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:07:19.917 Nvme1n1 : 1.02 7907.51 30.89 0.00 0.00 16104.22 8065.97 28029.24 00:07:19.917 [2024-11-26T19:41:15.164Z] =================================================================================================================== 00:07:19.917 [2024-11-26T19:41:15.164Z] Total : 7907.51 30.89 0.00 0.00 16104.22 8065.97 28029.24 00:07:19.917 12942.00 IOPS, 50.55 MiB/s 00:07:19.917 Latency(us) 00:07:19.917 [2024-11-26T19:41:15.164Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:19.917 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:07:19.917 Nvme1n1 : 1.01 13010.31 50.82 0.00 0.00 9806.66 5142.06 22181.42 00:07:19.917 [2024-11-26T19:41:15.164Z] =================================================================================================================== 00:07:19.917 [2024-11-26T19:41:15.164Z] Total : 13010.31 50.82 0.00 0.00 9806.66 5142.06 22181.42 00:07:19.917 174936.00 IOPS, 683.34 MiB/s 00:07:19.917 Latency(us) 00:07:19.917 [2024-11-26T19:41:15.164Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:19.917 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:07:19.917 Nvme1n1 : 1.00 174553.00 681.85 0.00 0.00 729.22 343.43 2180.33 00:07:19.917 [2024-11-26T19:41:15.164Z] =================================================================================================================== 00:07:19.917 [2024-11-26T19:41:15.164Z] Total : 174553.00 681.85 0.00 0.00 729.22 343.43 2180.33 00:07:19.917 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 63088 00:07:19.917 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 63090 00:07:19.917 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 63091 00:07:19.917 7846.00 IOPS, 30.65 MiB/s 00:07:19.917 Latency(us) 00:07:19.917 [2024-11-26T19:41:15.164Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:19.917 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:07:19.917 Nvme1n1 : 1.01 7959.31 31.09 0.00 0.00 16032.21 5368.91 36498.51 00:07:19.917 [2024-11-26T19:41:15.164Z] =================================================================================================================== 00:07:19.917 [2024-11-26T19:41:15.164Z] Total : 7959.31 31.09 0.00 0.00 16032.21 5368.91 36498.51 00:07:19.917 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 63095 00:07:19.917 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:19.917 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.917 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:19.917 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.917 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:07:19.917 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # 
nvmftestfini 00:07:19.917 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:19.917 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:07:19.917 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:19.917 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:07:19.917 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:19.917 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:19.917 rmmod nvme_tcp 00:07:20.177 rmmod nvme_fabrics 00:07:20.177 rmmod nvme_keyring 00:07:20.177 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:20.177 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:07:20.177 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:07:20.177 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 63053 ']' 00:07:20.177 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 63053 00:07:20.177 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 63053 ']' 00:07:20.177 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 63053 00:07:20.177 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:07:20.177 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:20.177 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63053 00:07:20.177 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:20.177 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:20.177 killing process with pid 63053 00:07:20.177 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63053' 00:07:20.177 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 63053 00:07:20.177 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 63053 00:07:20.177 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:20.177 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:20.177 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:20.177 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:07:20.177 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:07:20.177 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:20.177 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:07:20.177 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:20.177 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:20.177 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:20.177 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:20.177 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:20.177 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:20.177 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:20.177 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:20.177 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:20.177 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:20.177 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:20.436 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:20.436 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:20.436 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:20.437 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:20.437 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:20.437 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:20.437 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:20.437 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:20.437 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:07:20.437 00:07:20.437 real 0m3.535s 00:07:20.437 user 0m15.142s 00:07:20.437 sys 0m1.561s 00:07:20.437 ************************************ 00:07:20.437 END TEST nvmf_bdev_io_wait 00:07:20.437 ************************************ 00:07:20.437 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:20.437 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:20.437 19:41:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:20.437 19:41:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:20.437 19:41:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.437 19:41:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:20.437 ************************************ 00:07:20.437 START TEST nvmf_queue_depth 00:07:20.437 ************************************ 00:07:20.437 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:20.437 * Looking for test storage... 00:07:20.437 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:20.437 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:20.437 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:07:20.437 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:20.697 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:20.697 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:20.697 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:20.697 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:20.697 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:07:20.697 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:07:20.697 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:07:20.698 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:07:20.698 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:07:20.698 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:07:20.698 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:07:20.698 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:20.698 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:07:20.698 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:07:20.698 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:20.698 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:20.698 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:07:20.698 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:07:20.698 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:20.698 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:07:20.698 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:07:20.698 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:07:20.698 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:07:20.698 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:20.698 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:07:20.698 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:07:20.698 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:20.698 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:20.698 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:07:20.698 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:20.698 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:20.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.698 --rc genhtml_branch_coverage=1 00:07:20.698 --rc genhtml_function_coverage=1 00:07:20.698 --rc genhtml_legend=1 00:07:20.698 --rc geninfo_all_blocks=1 00:07:20.698 --rc geninfo_unexecuted_blocks=1 00:07:20.698 00:07:20.698 ' 00:07:20.698 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:20.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.698 --rc genhtml_branch_coverage=1 00:07:20.698 --rc genhtml_function_coverage=1 00:07:20.698 --rc genhtml_legend=1 00:07:20.698 --rc geninfo_all_blocks=1 00:07:20.698 --rc geninfo_unexecuted_blocks=1 00:07:20.698 00:07:20.698 ' 00:07:20.698 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:20.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.698 --rc genhtml_branch_coverage=1 00:07:20.698 --rc genhtml_function_coverage=1 00:07:20.698 --rc genhtml_legend=1 00:07:20.698 --rc geninfo_all_blocks=1 00:07:20.698 --rc geninfo_unexecuted_blocks=1 00:07:20.698 00:07:20.698 ' 00:07:20.698 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:20.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.698 --rc genhtml_branch_coverage=1 00:07:20.698 --rc genhtml_function_coverage=1 00:07:20.698 --rc genhtml_legend=1 00:07:20.698 --rc geninfo_all_blocks=1 00:07:20.698 --rc geninfo_unexecuted_blocks=1 00:07:20.698 00:07:20.698 ' 00:07:20.698 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:20.698 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:07:20.698 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:20.698 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:20.698 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:20.698 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:20.698 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:20.698 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:20.698 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:20.698 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:20.698 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:20.698 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:20.698 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:07:20.698 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=91838eb1-5852-43eb-90b2-09876f360ab2 00:07:20.698 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:20.698 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:20.698 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:20.698 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:20.698 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:20.698 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:07:20.698 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:20.698 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:20.698 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:20.698 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.698 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.698 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.698 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:07:20.698 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.698 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:07:20.698 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:20.698 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:20.698 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:20.698 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:20.698 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:20.698 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:20.698 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:20.698 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:20.698 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:20.699 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:20.699 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:07:20.699 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:07:20.699 
19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:20.699 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:07:20.699 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:20.699 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:20.699 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:20.699 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:20.699 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:20.699 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:20.699 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:20.699 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:20.699 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:20.699 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:20.699 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:20.699 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:20.699 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:20.699 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:20.699 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:20.699 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:20.699 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:20.699 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:20.699 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:20.699 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:20.699 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:20.699 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:20.699 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:20.699 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:20.699 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:20.699 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:20.699 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:20.699 19:41:15 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:20.699 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:20.699 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:20.699 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:20.699 Cannot find device "nvmf_init_br" 00:07:20.699 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:07:20.699 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:20.699 Cannot find device "nvmf_init_br2" 00:07:20.699 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:07:20.699 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:20.699 Cannot find device "nvmf_tgt_br" 00:07:20.699 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:07:20.699 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:20.699 Cannot find device "nvmf_tgt_br2" 00:07:20.699 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:07:20.699 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:20.699 Cannot find device "nvmf_init_br" 00:07:20.699 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:07:20.699 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:20.699 Cannot find device "nvmf_init_br2" 00:07:20.699 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:07:20.699 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:20.699 Cannot find device "nvmf_tgt_br" 00:07:20.699 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:07:20.699 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:20.699 Cannot find device "nvmf_tgt_br2" 00:07:20.699 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:07:20.699 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:20.699 Cannot find device "nvmf_br" 00:07:20.699 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:07:20.699 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:20.699 Cannot find device "nvmf_init_if" 00:07:20.699 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:07:20.699 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:20.699 Cannot find device "nvmf_init_if2" 00:07:20.699 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:07:20.699 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:20.699 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:20.699 19:41:15 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:07:20.699 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:20.699 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:20.699 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:07:20.699 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:20.699 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:20.699 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:20.699 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:20.699 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:20.699 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:20.699 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:20.699 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:20.699 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:20.699 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:20.957 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:20.957 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:20.957 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:20.957 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:20.957 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:20.957 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:20.957 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:20.957 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:20.957 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:20.958 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:20.958 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:20.958 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:20.958 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:20.958 
19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:20.958 19:41:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:20.958 19:41:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:20.958 19:41:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:20.958 19:41:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:20.958 19:41:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:20.958 19:41:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:20.958 19:41:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:20.958 19:41:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:20.958 19:41:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:20.958 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:20.958 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:07:20.958 00:07:20.958 --- 10.0.0.3 ping statistics --- 00:07:20.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:20.958 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:07:20.958 19:41:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:20.958 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:20.958 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.033 ms 00:07:20.958 00:07:20.958 --- 10.0.0.4 ping statistics --- 00:07:20.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:20.958 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:07:20.958 19:41:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:20.958 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:20.958 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:07:20.958 00:07:20.958 --- 10.0.0.1 ping statistics --- 00:07:20.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:20.958 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:07:20.958 19:41:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:20.958 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:20.958 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:07:20.958 00:07:20.958 --- 10.0.0.2 ping statistics --- 00:07:20.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:20.958 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:07:20.958 19:41:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:20.958 19:41:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:07:20.958 19:41:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:20.958 19:41:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:20.958 19:41:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:20.958 19:41:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:20.958 19:41:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:20.958 19:41:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:20.958 19:41:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:20.958 19:41:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:07:20.958 19:41:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:20.958 19:41:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:20.958 19:41:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:20.958 19:41:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=63351 00:07:20.958 19:41:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 63351 00:07:20.958 19:41:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 63351 ']' 00:07:20.958 19:41:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.958 19:41:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:20.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.958 19:41:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:07:20.958 19:41:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.958 19:41:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:20.958 19:41:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:20.958 [2024-11-26 19:41:16.106662] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:07:20.958 [2024-11-26 19:41:16.106744] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:21.215 [2024-11-26 19:41:16.248928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.215 [2024-11-26 19:41:16.285662] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:21.215 [2024-11-26 19:41:16.285716] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:21.215 [2024-11-26 19:41:16.285724] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:21.215 [2024-11-26 19:41:16.285730] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:21.215 [2024-11-26 19:41:16.285735] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:21.215 [2024-11-26 19:41:16.286012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.215 [2024-11-26 19:41:16.319500] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:21.783 19:41:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:21.783 19:41:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:21.783 19:41:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:21.783 19:41:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:21.783 19:41:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:21.783 19:41:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:21.783 19:41:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:21.783 19:41:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.783 19:41:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:21.783 [2024-11-26 19:41:16.986358] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:21.783 19:41:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.783 19:41:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:21.783 19:41:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.783 19:41:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:21.783 Malloc0 00:07:21.783 19:41:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.783 19:41:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:21.783 19:41:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.783 19:41:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:07:21.783 19:41:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.783 19:41:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:21.783 19:41:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.783 19:41:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:21.783 19:41:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.783 19:41:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:07:21.783 19:41:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.783 19:41:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:22.041 [2024-11-26 19:41:17.030153] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:22.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:22.041 19:41:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.042 19:41:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=63383 00:07:22.042 19:41:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:22.042 19:41:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 63383 /var/tmp/bdevperf.sock 00:07:22.042 19:41:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 63383 ']' 00:07:22.042 19:41:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:22.042 19:41:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:07:22.042 19:41:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:22.042 19:41:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:22.042 19:41:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:22.042 19:41:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:22.042 [2024-11-26 19:41:17.068699] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:07:22.042 [2024-11-26 19:41:17.068785] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63383 ] 00:07:22.042 [2024-11-26 19:41:17.202478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.042 [2024-11-26 19:41:17.239107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.042 [2024-11-26 19:41:17.271185] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:22.982 19:41:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:22.982 19:41:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:22.982 19:41:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:07:22.982 19:41:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.982 19:41:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:22.982 NVMe0n1 00:07:22.982 19:41:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.982 19:41:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:22.982 Running I/O for 10 seconds... 00:07:25.302 7163.00 IOPS, 27.98 MiB/s [2024-11-26T19:41:21.485Z] 7680.00 IOPS, 30.00 MiB/s [2024-11-26T19:41:22.423Z] 7943.67 IOPS, 31.03 MiB/s [2024-11-26T19:41:23.365Z] 8206.25 IOPS, 32.06 MiB/s [2024-11-26T19:41:24.310Z] 8374.40 IOPS, 32.71 MiB/s [2024-11-26T19:41:25.251Z] 8406.50 IOPS, 32.84 MiB/s [2024-11-26T19:41:26.225Z] 8504.29 IOPS, 33.22 MiB/s [2024-11-26T19:41:27.160Z] 8854.50 IOPS, 34.59 MiB/s [2024-11-26T19:41:28.534Z] 9130.89 IOPS, 35.67 MiB/s [2024-11-26T19:41:28.534Z] 9305.60 IOPS, 36.35 MiB/s 00:07:33.287 Latency(us) 00:07:33.287 [2024-11-26T19:41:28.534Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:33.287 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:07:33.287 Verification LBA range: start 0x0 length 0x4000 00:07:33.287 NVMe0n1 : 10.08 9330.96 36.45 0.00 0.00 109248.99 23290.49 89128.96 00:07:33.287 [2024-11-26T19:41:28.534Z] =================================================================================================================== 00:07:33.287 [2024-11-26T19:41:28.534Z] Total : 9330.96 36.45 0.00 0.00 109248.99 23290.49 89128.96 00:07:33.287 { 00:07:33.287 "results": [ 00:07:33.287 { 00:07:33.287 "job": "NVMe0n1", 00:07:33.287 "core_mask": "0x1", 00:07:33.287 "workload": "verify", 00:07:33.287 "status": "finished", 00:07:33.287 "verify_range": { 00:07:33.287 "start": 0, 00:07:33.287 "length": 16384 00:07:33.287 }, 00:07:33.287 "queue_depth": 1024, 00:07:33.287 "io_size": 4096, 00:07:33.287 "runtime": 10.082568, 00:07:33.287 "iops": 9330.956161168464, 00:07:33.287 "mibps": 36.44904750456431, 00:07:33.287 "io_failed": 0, 00:07:33.287 "io_timeout": 0, 00:07:33.287 "avg_latency_us": 109248.98622291992, 00:07:33.287 "min_latency_us": 23290.486153846156, 00:07:33.287 "max_latency_us": 89128.96 00:07:33.287 } 
00:07:33.287 ], 00:07:33.287 "core_count": 1 00:07:33.287 } 00:07:33.287 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 63383 00:07:33.287 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 63383 ']' 00:07:33.287 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 63383 00:07:33.287 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:07:33.287 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:33.287 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63383 00:07:33.287 killing process with pid 63383 00:07:33.287 Received shutdown signal, test time was about 10.000000 seconds 00:07:33.287 00:07:33.287 Latency(us) 00:07:33.287 [2024-11-26T19:41:28.534Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:33.287 [2024-11-26T19:41:28.534Z] =================================================================================================================== 00:07:33.287 [2024-11-26T19:41:28.534Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:33.287 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:33.287 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:33.287 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63383' 00:07:33.287 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 63383 00:07:33.287 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 63383 00:07:33.287 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:07:33.287 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:07:33.287 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:33.287 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:07:33.287 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:33.287 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:07:33.287 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:33.287 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:33.287 rmmod nvme_tcp 00:07:33.287 rmmod nvme_fabrics 00:07:33.287 rmmod nvme_keyring 00:07:33.287 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:33.287 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:07:33.287 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:07:33.287 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 63351 ']' 00:07:33.287 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 63351 00:07:33.287 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 63351 ']' 00:07:33.287 
19:41:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 63351 00:07:33.287 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:07:33.287 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:33.287 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63351 00:07:33.287 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:33.287 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:33.287 killing process with pid 63351 00:07:33.287 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63351' 00:07:33.287 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 63351 00:07:33.287 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 63351 00:07:33.546 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:33.546 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:33.546 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:33.546 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:07:33.546 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:07:33.546 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:33.546 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:07:33.546 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:33.546 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:33.546 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:33.546 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:33.546 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:33.546 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:33.546 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:33.546 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:33.546 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:33.546 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:33.546 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:33.546 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:33.546 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:33.546 19:41:28 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:33.546 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:33.804 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:33.804 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:33.804 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:33.804 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:33.804 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:07:33.804 00:07:33.804 real 0m13.252s 00:07:33.804 user 0m23.176s 00:07:33.804 sys 0m1.691s 00:07:33.804 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:33.804 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:33.804 ************************************ 00:07:33.804 END TEST nvmf_queue_depth 00:07:33.804 ************************************ 00:07:33.804 19:41:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:07:33.804 19:41:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:33.804 19:41:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:33.804 19:41:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:33.804 ************************************ 00:07:33.804 START TEST nvmf_target_multipath 00:07:33.804 ************************************ 00:07:33.804 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:07:33.804 * Looking for test storage... 
00:07:33.804 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:33.804 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:33.804 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:33.804 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:07:33.804 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:33.804 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:33.804 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:33.804 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:33.804 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:07:33.804 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:07:33.804 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:07:33.804 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:07:33.804 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:07:33.804 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:07:33.804 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:07:33.804 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:33.804 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:07:33.804 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:07:33.804 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:33.804 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:33.804 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:07:33.804 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:07:33.804 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:33.804 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:07:33.804 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:07:33.804 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:07:33.804 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:07:33.804 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:33.804 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:07:33.804 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:07:33.804 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:33.804 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:33.804 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:07:33.804 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:33.804 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:33.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.805 --rc genhtml_branch_coverage=1 00:07:33.805 --rc genhtml_function_coverage=1 00:07:33.805 --rc genhtml_legend=1 00:07:33.805 --rc geninfo_all_blocks=1 00:07:33.805 --rc geninfo_unexecuted_blocks=1 00:07:33.805 00:07:33.805 ' 00:07:33.805 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:33.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.805 --rc genhtml_branch_coverage=1 00:07:33.805 --rc genhtml_function_coverage=1 00:07:33.805 --rc genhtml_legend=1 00:07:33.805 --rc geninfo_all_blocks=1 00:07:33.805 --rc geninfo_unexecuted_blocks=1 00:07:33.805 00:07:33.805 ' 00:07:33.805 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:33.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.805 --rc genhtml_branch_coverage=1 00:07:33.805 --rc genhtml_function_coverage=1 00:07:33.805 --rc genhtml_legend=1 00:07:33.805 --rc geninfo_all_blocks=1 00:07:33.805 --rc geninfo_unexecuted_blocks=1 00:07:33.805 00:07:33.805 ' 00:07:33.805 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:33.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.805 --rc genhtml_branch_coverage=1 00:07:33.805 --rc genhtml_function_coverage=1 00:07:33.805 --rc genhtml_legend=1 00:07:33.805 --rc geninfo_all_blocks=1 00:07:33.805 --rc geninfo_unexecuted_blocks=1 00:07:33.805 00:07:33.805 ' 00:07:33.805 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:33.805 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:07:33.805 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:33.805 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:33.805 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:33.805 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:33.805 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:33.805 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:33.805 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:33.805 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:33.805 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:33.805 19:41:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:33.805 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:07:33.805 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=91838eb1-5852-43eb-90b2-09876f360ab2 00:07:33.805 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:33.805 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:33.805 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:33.805 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:33.805 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:33.805 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:07:33.805 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:33.805 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:33.805 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:33.805 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.805 
19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.805 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.805 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:07:33.805 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.805 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:07:33.805 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:33.805 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:33.805 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:33.805 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:33.805 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:33.805 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:33.805 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:33.805 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:33.806 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:33.806 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:33.806 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:07:33.806 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:33.806 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:07:33.806 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:33.806 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:07:33.806 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:33.806 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:33.806 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:33.806 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:33.806 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:33.806 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:33.806 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:33.806 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:33.806 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:33.806 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:33.806 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:33.806 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:33.806 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:33.806 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:33.806 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:33.806 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:33.806 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:33.806 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:33.806 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:33.806 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:33.806 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:33.806 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:33.806 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:33.806 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:33.806 19:41:29 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:33.806 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:33.806 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:33.806 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:33.806 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:33.806 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:33.806 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:33.806 Cannot find device "nvmf_init_br" 00:07:33.806 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:07:33.806 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:33.806 Cannot find device "nvmf_init_br2" 00:07:33.806 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:07:33.806 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:34.064 Cannot find device "nvmf_tgt_br" 00:07:34.064 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:07:34.064 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:34.064 Cannot find device "nvmf_tgt_br2" 00:07:34.064 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:07:34.064 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:34.064 Cannot find device "nvmf_init_br" 00:07:34.064 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:07:34.064 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:34.064 Cannot find device "nvmf_init_br2" 00:07:34.064 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:07:34.064 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:34.064 Cannot find device "nvmf_tgt_br" 00:07:34.064 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:07:34.064 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:34.064 Cannot find device "nvmf_tgt_br2" 00:07:34.064 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:07:34.064 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:34.064 Cannot find device "nvmf_br" 00:07:34.064 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:07:34.064 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:34.064 Cannot find device "nvmf_init_if" 00:07:34.064 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:07:34.064 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:34.064 Cannot find device "nvmf_init_if2" 00:07:34.064 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:07:34.064 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:34.064 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:34.064 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:07:34.064 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:34.064 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:34.064 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:07:34.064 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:34.064 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:34.064 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:34.064 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:34.064 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:34.064 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:34.064 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:34.064 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:34.064 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:34.064 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:34.064 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:34.064 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:34.064 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:34.064 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:34.064 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:34.064 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:34.064 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:34.064 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
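For readability, the veth topology that the nvmf_veth_init trace above and below assembles reduces to roughly the following — a condensed sketch using only the interface names, addresses, and port already shown in this trace, not an additional step the test performs:

  # Target side lives in its own network namespace; initiators stay in the root namespace.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator path 1
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target path 1
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator address, path 1
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address, path 1
  ip link add nvmf_br type bridge                                 # bridge joining the *_br veth peers
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up; ip link set nvmf_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP to the listener
  # The second path (nvmf_init_if2 / nvmf_tgt_if2 with 10.0.0.2 and 10.0.0.4) is built the same way,
  # giving the multipath test two independent routes to the same subsystem.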
00:07:34.064 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:34.064 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:34.064 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:34.064 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:34.064 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:34.064 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:34.064 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:34.064 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:34.064 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:34.064 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:34.064 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:34.064 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:34.064 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:34.064 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:34.064 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:34.064 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:34.064 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:07:34.064 00:07:34.064 --- 10.0.0.3 ping statistics --- 00:07:34.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.064 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:07:34.064 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:34.064 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:34.064 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:07:34.064 00:07:34.064 --- 10.0.0.4 ping statistics --- 00:07:34.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.064 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:07:34.064 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:34.322 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:34.322 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:07:34.322 00:07:34.322 --- 10.0.0.1 ping statistics --- 00:07:34.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.322 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:07:34.322 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:34.322 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:34.322 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:07:34.322 00:07:34.322 --- 10.0.0.2 ping statistics --- 00:07:34.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.322 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:07:34.322 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:34.322 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:07:34.322 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:34.322 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:34.322 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:34.322 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:34.322 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:34.322 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:34.322 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:34.322 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:07:34.322 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:07:34.323 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:07:34.323 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:34.323 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:34.323 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:07:34.323 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=63754 00:07:34.323 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:34.323 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 63754 00:07:34.323 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 63754 ']' 00:07:34.323 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.323 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:34.323 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:07:34.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:34.323 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:34.323 19:41:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:07:34.323 [2024-11-26 19:41:29.377991] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:07:34.323 [2024-11-26 19:41:29.378051] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:34.323 [2024-11-26 19:41:29.519596] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:34.323 [2024-11-26 19:41:29.558354] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:34.323 [2024-11-26 19:41:29.558394] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:34.323 [2024-11-26 19:41:29.558400] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:34.323 [2024-11-26 19:41:29.558405] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:34.323 [2024-11-26 19:41:29.558409] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:34.323 [2024-11-26 19:41:29.559145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.323 [2024-11-26 19:41:29.559465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:34.323 [2024-11-26 19:41:29.559467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.323 [2024-11-26 19:41:29.559516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:34.580 [2024-11-26 19:41:29.593358] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:35.145 19:41:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:35.145 19:41:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:07:35.145 19:41:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:35.145 19:41:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:35.145 19:41:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:07:35.145 19:41:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:35.145 19:41:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:35.402 [2024-11-26 19:41:30.478367] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:35.402 19:41:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:07:35.660 Malloc0 00:07:35.660 19:41:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:07:35.918 19:41:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:36.176 19:41:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:07:36.176 [2024-11-26 19:41:31.325324] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:36.176 19:41:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:07:36.433 [2024-11-26 19:41:31.493479] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:07:36.433 19:41:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid=91838eb1-5852-43eb-90b2-09876f360ab2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:07:36.433 19:41:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid=91838eb1-5852-43eb-90b2-09876f360ab2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:07:36.691 19:41:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:07:36.691 19:41:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:07:36.691 19:41:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:07:36.691 19:41:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:07:36.691 19:41:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:07:38.591 19:41:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:07:38.591 19:41:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:07:38.591 19:41:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:07:38.591 19:41:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:07:38.591 19:41:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:07:38.591 19:41:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:07:38.591 19:41:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:07:38.591 19:41:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:07:38.591 19:41:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:07:38.591 19:41:33 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:38.591 19:41:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:07:38.591 19:41:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:07:38.591 19:41:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:07:38.591 19:41:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:07:38.591 19:41:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:07:38.591 19:41:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:07:38.591 19:41:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:07:38.591 19:41:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:07:38.591 19:41:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:07:38.591 19:41:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:07:38.591 19:41:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:07:38.591 19:41:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:07:38.591 19:41:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:07:38.591 19:41:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:07:38.591 19:41:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:07:38.591 19:41:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:07:38.591 19:41:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:07:38.591 19:41:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:07:38.591 19:41:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:07:38.591 19:41:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:07:38.591 19:41:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:07:38.591 19:41:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:07:38.591 19:41:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=63838 00:07:38.591 19:41:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:07:38.591 19:41:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:07:38.591 [global] 00:07:38.591 thread=1 00:07:38.591 invalidate=1 00:07:38.591 rw=randrw 00:07:38.591 time_based=1 00:07:38.591 runtime=6 00:07:38.591 ioengine=libaio 00:07:38.591 direct=1 00:07:38.591 bs=4096 00:07:38.591 iodepth=128 00:07:38.591 norandommap=0 00:07:38.591 numjobs=1 00:07:38.591 00:07:38.591 verify_dump=1 00:07:38.591 verify_backlog=512 00:07:38.591 verify_state_save=0 00:07:38.591 do_verify=1 00:07:38.591 verify=crc32c-intel 00:07:38.591 [job0] 00:07:38.591 filename=/dev/nvme0n1 00:07:38.591 Could not set queue depth (nvme0n1) 00:07:38.849 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:07:38.849 fio-3.35 00:07:38.849 Starting 1 thread 00:07:39.784 19:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:07:39.784 19:41:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:07:40.042 19:41:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:07:40.042 19:41:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:07:40.042 19:41:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:07:40.042 19:41:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:07:40.042 19:41:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:07:40.042 19:41:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:07:40.042 19:41:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:07:40.042 19:41:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:07:40.042 19:41:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:07:40.042 19:41:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:07:40.042 19:41:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:07:40.042 19:41:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:07:40.042 19:41:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:07:40.300 19:41:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:07:40.558 19:41:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:07:40.558 19:41:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:07:40.558 19:41:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:07:40.558 19:41:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:07:40.558 19:41:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:07:40.558 19:41:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:07:40.558 19:41:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:07:40.558 19:41:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:07:40.558 19:41:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:07:40.558 19:41:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:07:40.558 19:41:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:07:40.558 19:41:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:07:40.558 19:41:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 63838 00:07:45.818 00:07:45.819 job0: (groupid=0, jobs=1): err= 0: pid=63859: Tue Nov 26 19:41:40 2024 00:07:45.819 read: IOPS=14.0k, BW=54.7MiB/s (57.3MB/s)(328MiB/6000msec) 00:07:45.819 slat (nsec): min=1390, max=16201k, avg=42825.60, stdev=188392.73 00:07:45.819 clat (usec): min=1227, max=21847, avg=6237.78, stdev=1298.35 00:07:45.819 lat (usec): min=1233, max=21888, avg=6280.61, stdev=1303.37 00:07:45.819 clat percentiles (usec): 00:07:45.819 | 1.00th=[ 3195], 5.00th=[ 4621], 10.00th=[ 5211], 20.00th=[ 5538], 00:07:45.819 | 30.00th=[ 5735], 40.00th=[ 5866], 50.00th=[ 5997], 60.00th=[ 6194], 00:07:45.819 | 70.00th=[ 6390], 80.00th=[ 6783], 90.00th=[ 7701], 95.00th=[ 8848], 00:07:45.819 | 99.00th=[10683], 99.50th=[11469], 99.90th=[17433], 99.95th=[17433], 00:07:45.819 | 99.99th=[17433] 00:07:45.819 bw ( KiB/s): min=19096, max=38432, per=50.99%, avg=28555.91, stdev=6391.37, samples=11 00:07:45.819 iops : min= 4774, max= 9608, avg=7138.91, stdev=1597.87, samples=11 00:07:45.819 write: IOPS=8140, BW=31.8MiB/s (33.3MB/s)(171MiB/5377msec); 0 zone resets 00:07:45.819 slat (usec): min=2, max=3415, avg=49.08, stdev=129.24 00:07:45.819 clat (usec): min=1032, max=11951, avg=5408.58, stdev=1050.37 00:07:45.819 lat (usec): min=1051, max=11969, avg=5457.66, stdev=1054.67 00:07:45.819 clat percentiles (usec): 00:07:45.819 | 1.00th=[ 2409], 5.00th=[ 3195], 10.00th=[ 4293], 20.00th=[ 4948], 00:07:45.819 | 30.00th=[ 5145], 40.00th=[ 5342], 50.00th=[ 5473], 60.00th=[ 5604], 00:07:45.819 | 70.00th=[ 5735], 80.00th=[ 5932], 90.00th=[ 6456], 95.00th=[ 6980], 00:07:45.819 | 99.00th=[ 8455], 99.50th=[ 9241], 99.90th=[10552], 99.95th=[10945], 00:07:45.819 | 99.99th=[11469] 00:07:45.819 bw ( KiB/s): min=20000, max=37888, per=87.77%, avg=28580.36, stdev=6053.89, samples=11 00:07:45.819 iops : min= 5000, max= 9472, avg=7145.09, stdev=1513.47, samples=11 00:07:45.819 lat (msec) : 2=0.15%, 4=4.76%, 10=94.08%, 20=1.01%, 50=0.01% 00:07:45.819 cpu : usr=3.23%, sys=18.57%, ctx=7358, majf=0, minf=72 00:07:45.819 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:07:45.819 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:45.819 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:07:45.819 issued rwts: total=84010,43771,0,0 short=0,0,0,0 dropped=0,0,0,0 00:07:45.819 latency : target=0, window=0, percentile=100.00%, depth=128 00:07:45.819 00:07:45.819 Run status group 0 (all jobs): 00:07:45.819 READ: bw=54.7MiB/s (57.3MB/s), 54.7MiB/s-54.7MiB/s (57.3MB/s-57.3MB/s), io=328MiB (344MB), run=6000-6000msec 00:07:45.819 WRITE: bw=31.8MiB/s (33.3MB/s), 31.8MiB/s-31.8MiB/s (33.3MB/s-33.3MB/s), io=171MiB (179MB), run=5377-5377msec 00:07:45.819 00:07:45.819 Disk stats (read/write): 00:07:45.819 nvme0n1: ios=82981/42758, merge=0/0, ticks=498036/219155, in_queue=717191, util=98.53% 00:07:45.819 19:41:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:07:45.819 19:41:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:07:45.819 19:41:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:07:45.819 19:41:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:07:45.819 19:41:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:07:45.819 19:41:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:07:45.819 19:41:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:07:45.819 19:41:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:07:45.819 19:41:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:07:45.819 19:41:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:07:45.819 19:41:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:07:45.819 19:41:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:07:45.819 19:41:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:07:45.819 19:41:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:07:45.819 19:41:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:07:45.819 19:41:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=63942 00:07:45.819 19:41:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:07:45.819 19:41:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:07:45.819 [global] 00:07:45.819 thread=1 00:07:45.819 invalidate=1 00:07:45.819 rw=randrw 00:07:45.819 time_based=1 00:07:45.819 runtime=6 00:07:45.819 ioengine=libaio 00:07:45.819 direct=1 00:07:45.819 bs=4096 00:07:45.819 iodepth=128 00:07:45.819 norandommap=0 00:07:45.819 numjobs=1 00:07:45.819 00:07:45.819 verify_dump=1 00:07:45.819 verify_backlog=512 00:07:45.819 verify_state_save=0 00:07:45.819 do_verify=1 00:07:45.819 verify=crc32c-intel 00:07:45.819 [job0] 00:07:45.819 filename=/dev/nvme0n1 00:07:45.819 Could not set queue depth (nvme0n1) 00:07:45.819 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:07:45.819 fio-3.35 00:07:45.819 Starting 1 thread 00:07:46.385 19:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:07:46.644 19:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:07:46.902 
19:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:07:46.902 19:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:07:46.902 19:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:07:46.902 19:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:07:46.902 19:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:07:46.902 19:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:07:46.902 19:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:07:46.902 19:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:07:46.902 19:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:07:46.902 19:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:07:46.902 19:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:07:46.902 19:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:07:46.902 19:41:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:07:47.166 19:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:07:47.423 19:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:07:47.423 19:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:07:47.423 19:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:07:47.423 19:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:07:47.423 19:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:07:47.423 19:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:07:47.423 19:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:07:47.423 19:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:07:47.423 19:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:07:47.423 19:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:07:47.423 19:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:07:47.423 19:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:07:47.423 19:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 63942 00:07:51.598 00:07:51.598 job0: (groupid=0, jobs=1): err= 0: pid=63968: Tue Nov 26 19:41:46 2024 00:07:51.599 read: IOPS=16.3k, BW=63.6MiB/s (66.7MB/s)(382MiB/6005msec) 00:07:51.599 slat (nsec): min=1376, max=4711.3k, avg=31846.61, stdev=144805.42 00:07:51.599 clat (usec): min=202, max=10877, avg=5430.66, stdev=1362.66 00:07:51.599 lat (usec): min=210, max=10888, avg=5462.51, stdev=1374.65 00:07:51.599 clat percentiles (usec): 00:07:51.599 | 1.00th=[ 1975], 5.00th=[ 3064], 10.00th=[ 3523], 20.00th=[ 4228], 00:07:51.599 | 30.00th=[ 5080], 40.00th=[ 5473], 50.00th=[ 5735], 60.00th=[ 5866], 00:07:51.599 | 70.00th=[ 5997], 80.00th=[ 6194], 90.00th=[ 6587], 95.00th=[ 7439], 00:07:51.599 | 99.00th=[ 9241], 99.50th=[ 9503], 99.90th=[ 9896], 99.95th=[10028], 00:07:51.599 | 99.99th=[10814] 00:07:51.599 bw ( KiB/s): min=11544, max=56768, per=52.15%, avg=33952.18, stdev=12774.77, samples=11 00:07:51.599 iops : min= 2886, max=14192, avg=8488.00, stdev=3193.63, samples=11 00:07:51.599 write: IOPS=9992, BW=39.0MiB/s (40.9MB/s)(201MiB/5141msec); 0 zone resets 00:07:51.599 slat (usec): min=2, max=4365, avg=37.81, stdev=105.69 00:07:51.599 clat (usec): min=414, max=10190, avg=4510.37, stdev=1352.22 00:07:51.599 lat (usec): min=432, max=10207, avg=4548.18, stdev=1364.49 00:07:51.599 clat percentiles (usec): 00:07:51.599 | 1.00th=[ 1631], 5.00th=[ 2245], 10.00th=[ 2573], 20.00th=[ 2999], 00:07:51.599 | 30.00th=[ 3490], 40.00th=[ 4621], 50.00th=[ 5014], 60.00th=[ 5211], 00:07:51.599 | 70.00th=[ 5407], 80.00th=[ 5604], 90.00th=[ 5800], 95.00th=[ 6063], 00:07:51.599 | 99.00th=[ 7832], 99.50th=[ 8291], 99.90th=[ 9372], 99.95th=[ 9503], 00:07:51.599 | 99.99th=[ 9896] 00:07:51.599 bw ( KiB/s): min=12040, max=57224, per=85.09%, avg=34012.64, stdev=12522.14, samples=11 00:07:51.599 iops : min= 3010, max=14306, avg=8503.09, stdev=3130.45, samples=11 00:07:51.599 lat (usec) : 250=0.01%, 500=0.02%, 750=0.02%, 1000=0.04% 00:07:51.599 lat (msec) : 2=1.56%, 4=21.56%, 10=76.77%, 20=0.04% 00:07:51.599 cpu : usr=4.31%, sys=19.40%, ctx=8641, majf=0, minf=127 00:07:51.599 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:07:51.599 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:51.599 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:07:51.599 issued rwts: total=97737,51373,0,0 short=0,0,0,0 dropped=0,0,0,0 00:07:51.599 
latency : target=0, window=0, percentile=100.00%, depth=128 00:07:51.599 00:07:51.599 Run status group 0 (all jobs): 00:07:51.599 READ: bw=63.6MiB/s (66.7MB/s), 63.6MiB/s-63.6MiB/s (66.7MB/s-66.7MB/s), io=382MiB (400MB), run=6005-6005msec 00:07:51.599 WRITE: bw=39.0MiB/s (40.9MB/s), 39.0MiB/s-39.0MiB/s (40.9MB/s-40.9MB/s), io=201MiB (210MB), run=5141-5141msec 00:07:51.599 00:07:51.599 Disk stats (read/write): 00:07:51.599 nvme0n1: ios=96640/50423, merge=0/0, ticks=505107/213266, in_queue=718373, util=98.58% 00:07:51.599 19:41:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:51.857 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:07:51.857 19:41:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:51.857 19:41:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:07:51.857 19:41:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:07:51.857 19:41:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:51.857 19:41:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:51.857 19:41:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:07:51.857 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:07:51.857 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:52.115 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:07:52.115 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:07:52.115 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:07:52.115 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:07:52.115 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:52.115 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:07:52.115 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:52.115 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:07:52.115 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:52.115 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:52.115 rmmod nvme_tcp 00:07:52.115 rmmod nvme_fabrics 00:07:52.115 rmmod nvme_keyring 00:07:52.115 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:52.115 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:07:52.115 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:07:52.115 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@517 -- # '[' -n 63754 ']' 00:07:52.115 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 63754 00:07:52.115 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 63754 ']' 00:07:52.115 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 63754 00:07:52.115 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:07:52.115 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:52.115 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63754 00:07:52.115 killing process with pid 63754 00:07:52.115 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:52.115 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:52.115 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63754' 00:07:52.115 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 63754 00:07:52.115 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 63754 00:07:52.373 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:52.373 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:52.373 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:52.373 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:07:52.373 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:52.373 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:07:52.373 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:07:52.373 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:52.373 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:52.373 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:52.373 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:52.373 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:52.373 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:52.373 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:52.373 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:52.373 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:52.373 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set 
nvmf_tgt_br2 down 00:07:52.373 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:52.373 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:52.632 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:52.632 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:52.632 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:52.632 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:52.632 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:52.632 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:52.632 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:52.632 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:07:52.632 ************************************ 00:07:52.632 END TEST nvmf_target_multipath 00:07:52.632 ************************************ 00:07:52.632 00:07:52.632 real 0m18.864s 00:07:52.632 user 1m10.699s 00:07:52.632 sys 0m7.718s 00:07:52.632 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:52.632 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:07:52.632 19:41:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:07:52.632 19:41:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:52.632 19:41:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.632 19:41:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:52.632 ************************************ 00:07:52.632 START TEST nvmf_zcopy 00:07:52.632 ************************************ 00:07:52.632 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:07:52.632 * Looking for test storage... 
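The multipath test that just finished drives its assertions through the check_ana_state helper, whose expansion appears repeatedly above (the local path/ana_state pair, timeout=20, the /sys/block/<path>/ana_state file, and the two [[ ]] tests on multipath.sh line 25). Reassembled from those expansions, the helper is a small poll loop over the kernel's per-path ANA attribute. The sketch below is a reconstruction, not the verbatim script: in this run every path already reports the expected state, so the retry branch is never traced and its sleep/timeout handling is filled in here as an assumption.

    # Reconstructed from the multipath.sh expansions above; the retry and
    # timeout handling is inferred, since the loop body never runs in this log.
    check_ana_state() {
        local path=$1 ana_state=$2
        local timeout=20
        local ana_state_f=/sys/block/$path/ana_state

        # Wait until the sysfs file exists and reports the expected ANA state.
        while [[ ! -e $ana_state_f ]] || [[ $(< "$ana_state_f") != "$ana_state" ]]; do
            if ((timeout-- == 0)); then
                echo "timed out waiting for $path to reach $ana_state" >&2
                return 1
            fi
            sleep 1
        done
    }

    # Usage as seen in the trace:
    check_ana_state nvme0c0n1 optimized
    check_ana_state nvme0c1n1 inaccessible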
00:07:52.632 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:52.632 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:52.632 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:52.632 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:07:52.890 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:52.890 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:52.890 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:52.890 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:52.890 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:07:52.890 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:07:52.890 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:07:52.890 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:07:52.890 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:07:52.890 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:07:52.890 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:07:52.890 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:52.890 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:07:52.890 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:07:52.890 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:52.890 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:52.890 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:07:52.890 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:07:52.890 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:52.890 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:07:52.890 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:07:52.890 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:07:52.890 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:07:52.890 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:52.890 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:52.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.891 --rc genhtml_branch_coverage=1 00:07:52.891 --rc genhtml_function_coverage=1 00:07:52.891 --rc genhtml_legend=1 00:07:52.891 --rc geninfo_all_blocks=1 00:07:52.891 --rc geninfo_unexecuted_blocks=1 00:07:52.891 00:07:52.891 ' 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:52.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.891 --rc genhtml_branch_coverage=1 00:07:52.891 --rc genhtml_function_coverage=1 00:07:52.891 --rc genhtml_legend=1 00:07:52.891 --rc geninfo_all_blocks=1 00:07:52.891 --rc geninfo_unexecuted_blocks=1 00:07:52.891 00:07:52.891 ' 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:52.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.891 --rc genhtml_branch_coverage=1 00:07:52.891 --rc genhtml_function_coverage=1 00:07:52.891 --rc genhtml_legend=1 00:07:52.891 --rc geninfo_all_blocks=1 00:07:52.891 --rc geninfo_unexecuted_blocks=1 00:07:52.891 00:07:52.891 ' 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:52.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.891 --rc genhtml_branch_coverage=1 00:07:52.891 --rc genhtml_function_coverage=1 00:07:52.891 --rc genhtml_legend=1 00:07:52.891 --rc geninfo_all_blocks=1 00:07:52.891 --rc geninfo_unexecuted_blocks=1 00:07:52.891 00:07:52.891 ' 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
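The block above is the xtrace expansion of the lcov version check in scripts/common.sh: lt 1.15 2 delegates to cmp_versions, which splits both version strings on '.', '-' and ':' and compares them component by component, treating a missing component as 0. A simplified standalone sketch of that logic (not the exact scripts/common.sh source, which routes each component through its decimal helper) is:

    # Simplified sketch of the comparison expanded in the trace above.
    # Only numeric version components are handled here.
    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v a b
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$3"

        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}    # "2" compares like "2.0"
            ((a > b)) && { [[ $op == ">" || $op == ">=" ]]; return; }
            ((a < b)) && { [[ $op == "<" || $op == "<=" ]]; return; }
        done
        [[ $op == "==" || $op == "<=" || $op == ">=" ]]    # equal versions
    }

    lt() { cmp_versions "$1" "<" "$2"; }

    lt 1.15 2 && echo "lcov 1.15 is older than 2"   # first components: 1 < 2, as in the trace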
00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=91838eb1-5852-43eb-90b2-09876f360ab2 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:52.891 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
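One cosmetic error is worth flagging in the records above: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']', and because the flag it tests is empty, test prints "[: : integer expression expected" and the condition simply evaluates false. The run is unaffected, but the usual defensive pattern is to default the value before the numeric comparison. The variable name below is illustrative, not the one common.sh actually tests:

    # Noisy when the flag is unset or empty:
    #   [: : integer expression expected
    SOME_FLAG=""
    if [ "$SOME_FLAG" -eq 1 ]; then echo "enabled"; fi

    # Quiet: treat empty/unset as 0 before comparing numerically.
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then echo "enabled"; fi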
00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:52.891 Cannot find device "nvmf_init_br" 00:07:52.891 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:07:52.892 19:41:47 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:52.892 Cannot find device "nvmf_init_br2" 00:07:52.892 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:07:52.892 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:52.892 Cannot find device "nvmf_tgt_br" 00:07:52.892 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:07:52.892 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:52.892 Cannot find device "nvmf_tgt_br2" 00:07:52.892 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:07:52.892 19:41:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:52.892 Cannot find device "nvmf_init_br" 00:07:52.892 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:07:52.892 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:52.892 Cannot find device "nvmf_init_br2" 00:07:52.892 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:07:52.892 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:52.892 Cannot find device "nvmf_tgt_br" 00:07:52.892 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:07:52.892 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:52.892 Cannot find device "nvmf_tgt_br2" 00:07:52.892 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:07:52.892 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:52.892 Cannot find device "nvmf_br" 00:07:52.892 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:07:52.892 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:52.892 Cannot find device "nvmf_init_if" 00:07:52.892 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:07:52.892 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:52.892 Cannot find device "nvmf_init_if2" 00:07:52.892 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:07:52.892 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:52.892 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:52.892 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:07:52.892 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:52.892 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:52.892 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:07:52.892 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:52.892 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:52.892 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:07:52.892 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:52.892 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:52.892 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:52.892 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:53.153 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:53.153 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:53.153 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:53.153 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:53.153 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:53.153 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:53.153 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:53.153 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:53.153 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:53.153 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:53.153 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:53.153 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:53.153 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:53.153 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:53.153 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:53.153 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:53.153 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:53.153 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:53.153 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:53.153 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:53.153 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:53.153 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:53.153 19:41:48 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:53.153 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:53.153 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:53.153 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:53.153 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:53.153 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:07:53.153 00:07:53.153 --- 10.0.0.3 ping statistics --- 00:07:53.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.153 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:07:53.153 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:53.153 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:53.153 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:07:53.153 00:07:53.153 --- 10.0.0.4 ping statistics --- 00:07:53.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.153 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:07:53.153 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:53.154 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:53.154 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:07:53.154 00:07:53.154 --- 10.0.0.1 ping statistics --- 00:07:53.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.154 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:07:53.154 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:53.154 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:53.154 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.039 ms 00:07:53.154 00:07:53.154 --- 10.0.0.2 ping statistics --- 00:07:53.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.154 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:07:53.154 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:53.154 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:07:53.154 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:53.154 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:53.154 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:53.154 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:53.154 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:53.154 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:53.154 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:53.154 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:07:53.154 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:53.154 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:53.154 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:53.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.154 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=64265 00:07:53.154 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 64265 00:07:53.154 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 64265 ']' 00:07:53.154 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.154 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:53.154 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.154 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:53.154 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:53.154 19:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:07:53.154 [2024-11-26 19:41:48.329008] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
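For orientation, the nvmf_veth_init sequence expanded over the last several records builds a self-contained test network: a namespace for the target, veth pairs whose target-side ends are moved into it, addresses 10.0.0.1/10.0.0.2 on the initiator side and 10.0.0.3/10.0.0.4 inside the namespace, a bridge joining the root-namespace ends, iptables rules admitting TCP port 4420, and ping checks in both directions. The condensed sketch below shows one of the two interface pairs; it is a summary of the commands visible in the trace, not a substitute for nvmf/common.sh:

    ip netns add nvmf_tgt_ns_spdk

    # One initiator-side and one target-side veth pair (the second pair,
    # nvmf_init_if2/nvmf_tgt_if2 carrying 10.0.0.2/10.0.0.4, is set up the same way).
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # Bridge the root-namespace ends together and let NVMe/TCP traffic through.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    ping -c 1 10.0.0.3                                   # initiator -> target
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target -> initiator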
00:07:53.154 [2024-11-26 19:41:48.329070] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:53.415 [2024-11-26 19:41:48.469726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.415 [2024-11-26 19:41:48.505302] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:53.415 [2024-11-26 19:41:48.505339] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:53.415 [2024-11-26 19:41:48.505346] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:53.415 [2024-11-26 19:41:48.505351] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:53.415 [2024-11-26 19:41:48.505355] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:53.415 [2024-11-26 19:41:48.505611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:53.415 [2024-11-26 19:41:48.537700] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:53.983 19:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:53.983 19:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:07:53.983 19:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:53.983 19:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:53.983 19:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:54.242 19:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:54.242 19:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:07:54.242 19:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:07:54.242 19:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.242 19:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:54.242 [2024-11-26 19:41:49.235459] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:54.242 19:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.242 19:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:54.242 19:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.242 19:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:54.242 19:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.242 19:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:07:54.242 19:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.242 19:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:07:54.242 [2024-11-26 19:41:49.252020] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:54.242 19:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.242 19:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:54.242 19:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.242 19:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:54.242 19:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.242 19:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:07:54.242 19:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.242 19:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:54.242 malloc0 00:07:54.242 19:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.242 19:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:07:54.242 19:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.242 19:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:54.242 19:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.242 19:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:07:54.242 19:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:07:54.242 19:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:07:54.242 19:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:07:54.242 19:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:54.242 19:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:54.242 { 00:07:54.242 "params": { 00:07:54.242 "name": "Nvme$subsystem", 00:07:54.242 "trtype": "$TEST_TRANSPORT", 00:07:54.242 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:54.242 "adrfam": "ipv4", 00:07:54.242 "trsvcid": "$NVMF_PORT", 00:07:54.242 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:54.242 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:54.242 "hdgst": ${hdgst:-false}, 00:07:54.242 "ddgst": ${ddgst:-false} 00:07:54.242 }, 00:07:54.242 "method": "bdev_nvme_attach_controller" 00:07:54.242 } 00:07:54.242 EOF 00:07:54.242 )") 00:07:54.242 19:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:07:54.242 19:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
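Collected in one place, the rpc.py calls traced just above are what stand up the target for this zcopy run: a TCP transport created with the options from NVMF_TRANSPORT_OPTS plus --zcopy, a subsystem capped at 10 namespaces with any host allowed, listeners for the subsystem and for discovery on the in-namespace address, and a 32 MB malloc bdev attached as namespace 1. The sketch below simply strings those calls together (nvmf_tgt itself was launched earlier with ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x2):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1

    # Transport options taken verbatim from the trace; --zcopy enables the
    # zero-copy socket handling this test exercises.
    $RPC nvmf_create_transport -t tcp -o -c 0 --zcopy

    # Subsystem allowing any host, limited to 10 namespaces.
    $RPC nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10

    # Listeners for the subsystem and for discovery on the target address.
    $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.3 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420

    # 32 MB malloc bdev with 4096-byte blocks, exported as NSID 1.
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns "$NQN" malloc0 -n 1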
00:07:54.242 19:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:07:54.242 19:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:54.242 "params": { 00:07:54.242 "name": "Nvme1", 00:07:54.242 "trtype": "tcp", 00:07:54.242 "traddr": "10.0.0.3", 00:07:54.242 "adrfam": "ipv4", 00:07:54.242 "trsvcid": "4420", 00:07:54.242 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:54.242 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:54.242 "hdgst": false, 00:07:54.242 "ddgst": false 00:07:54.242 }, 00:07:54.242 "method": "bdev_nvme_attach_controller" 00:07:54.242 }' 00:07:54.242 [2024-11-26 19:41:49.316629] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:07:54.242 [2024-11-26 19:41:49.316676] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64298 ] 00:07:54.242 [2024-11-26 19:41:49.453819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.499 [2024-11-26 19:41:49.490433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.499 [2024-11-26 19:41:49.530787] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:54.499 Running I/O for 10 seconds... 00:07:56.812 6768.00 IOPS, 52.88 MiB/s [2024-11-26T19:41:52.998Z] 6810.00 IOPS, 53.20 MiB/s [2024-11-26T19:41:53.961Z] 6665.33 IOPS, 52.07 MiB/s [2024-11-26T19:41:54.902Z] 6707.00 IOPS, 52.40 MiB/s [2024-11-26T19:41:55.838Z] 6721.80 IOPS, 52.51 MiB/s [2024-11-26T19:41:56.774Z] 6748.50 IOPS, 52.72 MiB/s [2024-11-26T19:41:57.742Z] 6766.14 IOPS, 52.86 MiB/s [2024-11-26T19:41:58.675Z] 6778.62 IOPS, 52.96 MiB/s [2024-11-26T19:42:00.047Z] 6940.44 IOPS, 54.22 MiB/s [2024-11-26T19:42:00.047Z] 7099.50 IOPS, 55.46 MiB/s 00:08:04.800 Latency(us) 00:08:04.800 [2024-11-26T19:42:00.047Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:04.800 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:04.800 Verification LBA range: start 0x0 length 0x1000 00:08:04.800 Nvme1n1 : 10.01 7103.91 55.50 0.00 0.00 17970.00 200.86 27424.30 00:08:04.800 [2024-11-26T19:42:00.047Z] =================================================================================================================== 00:08:04.800 [2024-11-26T19:42:00.047Z] Total : 7103.91 55.50 0.00 0.00 17970.00 200.86 27424.30 00:08:04.800 19:41:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=64421 00:08:04.800 19:41:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:04.800 19:41:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:08:04.800 19:41:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:04.800 19:41:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:04.800 19:41:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:04.800 19:41:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:04.800 19:41:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:04.800 19:41:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:04.800 { 00:08:04.800 "params": { 00:08:04.801 "name": "Nvme$subsystem", 00:08:04.801 "trtype": "$TEST_TRANSPORT", 00:08:04.801 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:04.801 "adrfam": "ipv4", 00:08:04.801 "trsvcid": "$NVMF_PORT", 00:08:04.801 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:04.801 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:04.801 "hdgst": ${hdgst:-false}, 00:08:04.801 "ddgst": ${ddgst:-false} 00:08:04.801 }, 00:08:04.801 "method": "bdev_nvme_attach_controller" 00:08:04.801 } 00:08:04.801 EOF 00:08:04.801 )") 00:08:04.801 19:41:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:04.801 19:41:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:08:04.801 [2024-11-26 19:41:59.763680] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:04.801 [2024-11-26 19:41:59.763820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:04.801 19:41:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:04.801 19:41:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:04.801 "params": { 00:08:04.801 "name": "Nvme1", 00:08:04.801 "trtype": "tcp", 00:08:04.801 "traddr": "10.0.0.3", 00:08:04.801 "adrfam": "ipv4", 00:08:04.801 "trsvcid": "4420", 00:08:04.801 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:04.801 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:04.801 "hdgst": false, 00:08:04.801 "ddgst": false 00:08:04.801 }, 00:08:04.801 "method": "bdev_nvme_attach_controller" 00:08:04.801 }' 00:08:04.801 [2024-11-26 19:41:59.771658] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:04.801 [2024-11-26 19:41:59.771676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:04.801 [2024-11-26 19:41:59.779650] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:04.801 [2024-11-26 19:41:59.779666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:04.801 [2024-11-26 19:41:59.787651] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:04.801 [2024-11-26 19:41:59.787665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:04.801 [2024-11-26 19:41:59.790127] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
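Both bdevperf runs in this test receive their target description the same way: gen_nvmf_target_json emits a bdev_nvme_attach_controller entry (the JSON fragment printed above, pointing at 10.0.0.3:4420 and nqn.2016-06.io.spdk:cnode1), and the generated config is handed to bdevperf through a /dev/fd path rather than a temporary file, which is why the trace shows --json /dev/fd/62 for the 10-second verify run and --json /dev/fd/63 for the 5-second randrw run. That is consistent with bash process substitution; a minimal sketch of the invocation pattern, assuming nvmf/common.sh has been sourced so gen_nvmf_target_json and the test environment are available:

    source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf

    # <(...) expands to a /dev/fd/<n> path, so bdevperf reads the config
    # straight from the generator without a temp file on disk.
    $BDEVPERF --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192
    $BDEVPERF --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192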
00:08:04.801 [2024-11-26 19:41:59.790182] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64421 ] 00:08:04.801 [2024-11-26 19:41:59.795652] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:04.801 [2024-11-26 19:41:59.795736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:04.801 [2024-11-26 19:41:59.807660] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:04.801 [2024-11-26 19:41:59.807726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:04.801 [2024-11-26 19:41:59.815659] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:04.801 [2024-11-26 19:41:59.815721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:04.801 [2024-11-26 19:41:59.823664] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:04.801 [2024-11-26 19:41:59.823736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:04.801 [2024-11-26 19:41:59.831663] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:04.801 [2024-11-26 19:41:59.831726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:04.801 [2024-11-26 19:41:59.839665] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:04.801 [2024-11-26 19:41:59.839726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:04.801 [2024-11-26 19:41:59.847668] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:04.801 [2024-11-26 19:41:59.847727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:04.801 [2024-11-26 19:41:59.855671] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:04.801 [2024-11-26 19:41:59.855733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:04.801 [2024-11-26 19:41:59.867681] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:04.801 [2024-11-26 19:41:59.867779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:04.801 [2024-11-26 19:41:59.875675] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:04.801 [2024-11-26 19:41:59.875690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:04.801 [2024-11-26 19:41:59.883675] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:04.801 [2024-11-26 19:41:59.883689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:04.801 [2024-11-26 19:41:59.891676] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:04.801 [2024-11-26 19:41:59.891690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:04.801 [2024-11-26 19:41:59.899678] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:04.801 [2024-11-26 19:41:59.899693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:04.801 [2024-11-26 19:41:59.907679] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:04.801 [2024-11-26 19:41:59.907692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:04.801 [2024-11-26 19:41:59.915681] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:04.801 [2024-11-26 19:41:59.915695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:04.801 [2024-11-26 19:41:59.923683] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:04.801 [2024-11-26 19:41:59.923696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:04.801 [2024-11-26 19:41:59.925959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.801 [2024-11-26 19:41:59.935689] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:04.801 [2024-11-26 19:41:59.935784] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:04.801 [2024-11-26 19:41:59.943689] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:04.801 [2024-11-26 19:41:59.943755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:04.801 [2024-11-26 19:41:59.951689] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:04.801 [2024-11-26 19:41:59.951752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:04.801 [2024-11-26 19:41:59.958982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.801 [2024-11-26 19:41:59.959693] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:04.801 [2024-11-26 19:41:59.959759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:04.801 [2024-11-26 19:41:59.967695] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:04.801 [2024-11-26 19:41:59.967757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:04.801 [2024-11-26 19:41:59.975700] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:04.801 [2024-11-26 19:41:59.975775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:04.801 [2024-11-26 19:41:59.983700] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:04.801 [2024-11-26 19:41:59.983772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:04.801 [2024-11-26 19:41:59.991702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:04.801 [2024-11-26 19:41:59.991774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:04.801 [2024-11-26 19:41:59.997136] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:04.801 [2024-11-26 19:41:59.999702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:04.801 [2024-11-26 19:41:59.999773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:04.801 [2024-11-26 19:42:00.007705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:04.801 [2024-11-26 19:42:00.007788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:04.801 [2024-11-26 19:42:00.019709] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:04.801 [2024-11-26 19:42:00.019782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:04.801 [2024-11-26 19:42:00.027709] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:04.801 [2024-11-26 19:42:00.027775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:04.801 [2024-11-26 19:42:00.035721] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:04.801 [2024-11-26 19:42:00.035815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:04.801 [2024-11-26 19:42:00.043724] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:04.801 [2024-11-26 19:42:00.043810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.065 [2024-11-26 19:42:00.051728] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.065 [2024-11-26 19:42:00.051812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.065 [2024-11-26 19:42:00.059734] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.065 [2024-11-26 19:42:00.059818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.065 [2024-11-26 19:42:00.067740] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.065 [2024-11-26 19:42:00.067819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.065 [2024-11-26 19:42:00.075745] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.065 [2024-11-26 19:42:00.075825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.065 [2024-11-26 19:42:00.083747] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.065 [2024-11-26 19:42:00.083823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.065 [2024-11-26 19:42:00.091758] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.065 [2024-11-26 19:42:00.091843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.065 [2024-11-26 19:42:00.099758] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.065 [2024-11-26 19:42:00.099837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.065 Running I/O for 5 seconds... 
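"Running I/O for 5 seconds..." marks bdevperf starting its timed workload against the bdev exposed by the Nvme1 controller attached above; the throughput samples that appear later in the run (e.g. 16381.00 IOPS, 127.98 MiB/s) come from that workload. The repeating subsystem.c / nvmf_rpc.c pairs surrounding them show the target rejecting repeated attempts to add NSID 1 to cnode1 while that NSID is still attached, consistent with the test re-adding the namespace while I/O is in flight. Purely as an illustration (paths and names are assumptions, not taken from this job), a single such rejected attempt would look like:

# Illustrative only: assumes a running target, an existing bdev named Malloc0,
# and NSID 1 already attached to cnode1.
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1
# target log: subsystem.c: spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
#             nvmf_rpc.c:  nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace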
00:08:05.065 [2024-11-26 19:42:00.107762] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.065 [2024-11-26 19:42:00.107836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.065 [2024-11-26 19:42:00.119963] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.065 [2024-11-26 19:42:00.120051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.065 [2024-11-26 19:42:00.129377] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.065 [2024-11-26 19:42:00.129465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.065 [2024-11-26 19:42:00.144200] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.065 [2024-11-26 19:42:00.144291] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.065 [2024-11-26 19:42:00.151790] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.065 [2024-11-26 19:42:00.151811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.065 [2024-11-26 19:42:00.160477] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.065 [2024-11-26 19:42:00.160498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.065 [2024-11-26 19:42:00.169616] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.065 [2024-11-26 19:42:00.169639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.065 [2024-11-26 19:42:00.178775] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.065 [2024-11-26 19:42:00.178794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.065 [2024-11-26 19:42:00.187252] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.065 [2024-11-26 19:42:00.187275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.065 [2024-11-26 19:42:00.195825] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.065 [2024-11-26 19:42:00.195847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.065 [2024-11-26 19:42:00.204410] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.065 [2024-11-26 19:42:00.204433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.065 [2024-11-26 19:42:00.213913] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.065 [2024-11-26 19:42:00.214014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.065 [2024-11-26 19:42:00.223220] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.065 [2024-11-26 19:42:00.223242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.065 [2024-11-26 19:42:00.232362] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.065 [2024-11-26 19:42:00.232385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.065 [2024-11-26 19:42:00.240933] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.065 
[2024-11-26 19:42:00.241028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.065 [2024-11-26 19:42:00.249629] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.065 [2024-11-26 19:42:00.249652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.065 [2024-11-26 19:42:00.258339] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.065 [2024-11-26 19:42:00.258360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.065 [2024-11-26 19:42:00.265111] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.065 [2024-11-26 19:42:00.265132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.065 [2024-11-26 19:42:00.275417] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.065 [2024-11-26 19:42:00.275438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.065 [2024-11-26 19:42:00.284682] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.065 [2024-11-26 19:42:00.284782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.065 [2024-11-26 19:42:00.291518] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.065 [2024-11-26 19:42:00.291542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.066 [2024-11-26 19:42:00.302459] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.066 [2024-11-26 19:42:00.302482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.332 [2024-11-26 19:42:00.311480] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.332 [2024-11-26 19:42:00.311500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.332 [2024-11-26 19:42:00.320081] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.332 [2024-11-26 19:42:00.320174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.332 [2024-11-26 19:42:00.328773] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.332 [2024-11-26 19:42:00.328793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.332 [2024-11-26 19:42:00.337305] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.332 [2024-11-26 19:42:00.337326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.332 [2024-11-26 19:42:00.345904] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.332 [2024-11-26 19:42:00.345924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.333 [2024-11-26 19:42:00.354334] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.333 [2024-11-26 19:42:00.354354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.333 [2024-11-26 19:42:00.362928] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.333 [2024-11-26 19:42:00.363015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.333 [2024-11-26 19:42:00.371609] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.333 [2024-11-26 19:42:00.371630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.333 [2024-11-26 19:42:00.380927] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.333 [2024-11-26 19:42:00.380948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.333 [2024-11-26 19:42:00.390203] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.333 [2024-11-26 19:42:00.390290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.333 [2024-11-26 19:42:00.398840] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.333 [2024-11-26 19:42:00.398861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.333 [2024-11-26 19:42:00.408055] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.333 [2024-11-26 19:42:00.408076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.333 [2024-11-26 19:42:00.417332] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.333 [2024-11-26 19:42:00.417418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.333 [2024-11-26 19:42:00.426928] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.333 [2024-11-26 19:42:00.426949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.333 [2024-11-26 19:42:00.435456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.333 [2024-11-26 19:42:00.435477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.333 [2024-11-26 19:42:00.444824] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.333 [2024-11-26 19:42:00.444845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.333 [2024-11-26 19:42:00.451515] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.333 [2024-11-26 19:42:00.451535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.333 [2024-11-26 19:42:00.462463] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.333 [2024-11-26 19:42:00.462483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.333 [2024-11-26 19:42:00.469179] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.333 [2024-11-26 19:42:00.469201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.333 [2024-11-26 19:42:00.479486] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.333 [2024-11-26 19:42:00.479508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.333 [2024-11-26 19:42:00.486283] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.333 [2024-11-26 19:42:00.486304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.333 [2024-11-26 19:42:00.497318] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.333 [2024-11-26 19:42:00.497339] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.333 [2024-11-26 19:42:00.504581] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.333 [2024-11-26 19:42:00.504672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.333 [2024-11-26 19:42:00.515478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.333 [2024-11-26 19:42:00.515560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.333 [2024-11-26 19:42:00.524203] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.333 [2024-11-26 19:42:00.524224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.333 [2024-11-26 19:42:00.533409] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.333 [2024-11-26 19:42:00.533430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.333 [2024-11-26 19:42:00.542530] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.333 [2024-11-26 19:42:00.542616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.333 [2024-11-26 19:42:00.551741] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.333 [2024-11-26 19:42:00.551762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.333 [2024-11-26 19:42:00.561125] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.333 [2024-11-26 19:42:00.561209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.333 [2024-11-26 19:42:00.569862] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.333 [2024-11-26 19:42:00.569883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.591 [2024-11-26 19:42:00.578995] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.591 [2024-11-26 19:42:00.579015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.591 [2024-11-26 19:42:00.587556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.591 [2024-11-26 19:42:00.587577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.591 [2024-11-26 19:42:00.596175] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.591 [2024-11-26 19:42:00.596262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.591 [2024-11-26 19:42:00.605502] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.591 [2024-11-26 19:42:00.605523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.591 [2024-11-26 19:42:00.614809] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.591 [2024-11-26 19:42:00.614829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.591 [2024-11-26 19:42:00.623867] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.591 [2024-11-26 19:42:00.623888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.591 [2024-11-26 19:42:00.632938] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.591 [2024-11-26 19:42:00.632959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.591 [2024-11-26 19:42:00.641586] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.591 [2024-11-26 19:42:00.641608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.591 [2024-11-26 19:42:00.650811] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.591 [2024-11-26 19:42:00.650832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.591 [2024-11-26 19:42:00.659348] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.591 [2024-11-26 19:42:00.659368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.591 [2024-11-26 19:42:00.668589] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.591 [2024-11-26 19:42:00.668610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.591 [2024-11-26 19:42:00.675285] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.591 [2024-11-26 19:42:00.675306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.591 [2024-11-26 19:42:00.686385] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.591 [2024-11-26 19:42:00.686407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.591 [2024-11-26 19:42:00.695067] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.591 [2024-11-26 19:42:00.695088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.591 [2024-11-26 19:42:00.704307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.591 [2024-11-26 19:42:00.704328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.591 [2024-11-26 19:42:00.713043] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.591 [2024-11-26 19:42:00.713139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.591 [2024-11-26 19:42:00.721791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.591 [2024-11-26 19:42:00.721812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.591 [2024-11-26 19:42:00.730342] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.591 [2024-11-26 19:42:00.730364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.591 [2024-11-26 19:42:00.738892] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.591 [2024-11-26 19:42:00.738912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.591 [2024-11-26 19:42:00.748229] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.591 [2024-11-26 19:42:00.748324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.591 [2024-11-26 19:42:00.756964] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.591 [2024-11-26 19:42:00.756985] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.591 [2024-11-26 19:42:00.763652] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.591 [2024-11-26 19:42:00.763737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.591 [2024-11-26 19:42:00.778518] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.591 [2024-11-26 19:42:00.778542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.591 [2024-11-26 19:42:00.787167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.591 [2024-11-26 19:42:00.787251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.591 [2024-11-26 19:42:00.795857] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.591 [2024-11-26 19:42:00.795889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.591 [2024-11-26 19:42:00.805052] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.591 [2024-11-26 19:42:00.805072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.591 [2024-11-26 19:42:00.813526] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.592 [2024-11-26 19:42:00.813547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.592 [2024-11-26 19:42:00.822487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.592 [2024-11-26 19:42:00.822573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.592 [2024-11-26 19:42:00.831596] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.592 [2024-11-26 19:42:00.831616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.851 [2024-11-26 19:42:00.840874] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.851 [2024-11-26 19:42:00.840958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.851 [2024-11-26 19:42:00.850033] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.851 [2024-11-26 19:42:00.850057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.851 [2024-11-26 19:42:00.859277] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.851 [2024-11-26 19:42:00.859298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.851 [2024-11-26 19:42:00.868512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.851 [2024-11-26 19:42:00.868600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.851 [2024-11-26 19:42:00.877087] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.851 [2024-11-26 19:42:00.877108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.851 [2024-11-26 19:42:00.885653] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.851 [2024-11-26 19:42:00.885673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.851 [2024-11-26 19:42:00.894195] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.851 [2024-11-26 19:42:00.894216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.851 [2024-11-26 19:42:00.903524] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.851 [2024-11-26 19:42:00.903611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.851 [2024-11-26 19:42:00.910329] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.851 [2024-11-26 19:42:00.910351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.851 [2024-11-26 19:42:00.926229] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.851 [2024-11-26 19:42:00.926250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.851 [2024-11-26 19:42:00.933619] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.851 [2024-11-26 19:42:00.933705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.851 [2024-11-26 19:42:00.943881] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.851 [2024-11-26 19:42:00.943962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.851 [2024-11-26 19:42:00.952624] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.851 [2024-11-26 19:42:00.952646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.851 [2024-11-26 19:42:00.961979] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.851 [2024-11-26 19:42:00.962000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.851 [2024-11-26 19:42:00.971150] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.851 [2024-11-26 19:42:00.971233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.851 [2024-11-26 19:42:00.979729] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.851 [2024-11-26 19:42:00.979749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.851 [2024-11-26 19:42:00.986424] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.851 [2024-11-26 19:42:00.986504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.851 [2024-11-26 19:42:00.997979] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.851 [2024-11-26 19:42:00.998000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.851 [2024-11-26 19:42:01.006750] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.851 [2024-11-26 19:42:01.006781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.851 [2024-11-26 19:42:01.015998] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.851 [2024-11-26 19:42:01.016020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.851 [2024-11-26 19:42:01.025322] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.851 [2024-11-26 19:42:01.025407] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.851 [2024-11-26 19:42:01.034161] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.851 [2024-11-26 19:42:01.034181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.851 [2024-11-26 19:42:01.042712] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.851 [2024-11-26 19:42:01.042744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.851 [2024-11-26 19:42:01.051356] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.851 [2024-11-26 19:42:01.051380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.851 [2024-11-26 19:42:01.060547] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.851 [2024-11-26 19:42:01.060638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.851 [2024-11-26 19:42:01.069283] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.851 [2024-11-26 19:42:01.069305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.851 [2024-11-26 19:42:01.078609] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.851 [2024-11-26 19:42:01.078630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:05.851 [2024-11-26 19:42:01.087264] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:05.851 [2024-11-26 19:42:01.087348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.110 [2024-11-26 19:42:01.096707] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.110 [2024-11-26 19:42:01.096729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.110 16381.00 IOPS, 127.98 MiB/s [2024-11-26T19:42:01.357Z] [2024-11-26 19:42:01.111526] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.110 [2024-11-26 19:42:01.111548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.110 [2024-11-26 19:42:01.121705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.110 [2024-11-26 19:42:01.121727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.110 [2024-11-26 19:42:01.128449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.110 [2024-11-26 19:42:01.128470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.110 [2024-11-26 19:42:01.139563] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.110 [2024-11-26 19:42:01.139584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.110 [2024-11-26 19:42:01.148904] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.110 [2024-11-26 19:42:01.148924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.110 [2024-11-26 19:42:01.157407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.110 [2024-11-26 19:42:01.157428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.110 [2024-11-26 
19:42:01.165704] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.110 [2024-11-26 19:42:01.165725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.110 [2024-11-26 19:42:01.174050] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.110 [2024-11-26 19:42:01.174134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.110 [2024-11-26 19:42:01.180845] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.110 [2024-11-26 19:42:01.180864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.110 [2024-11-26 19:42:01.191862] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.110 [2024-11-26 19:42:01.191883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.110 [2024-11-26 19:42:01.198532] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.110 [2024-11-26 19:42:01.198553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.110 [2024-11-26 19:42:01.209373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.110 [2024-11-26 19:42:01.209394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.110 [2024-11-26 19:42:01.218157] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.110 [2024-11-26 19:42:01.218178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.110 [2024-11-26 19:42:01.224857] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.110 [2024-11-26 19:42:01.224877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.110 [2024-11-26 19:42:01.235929] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.110 [2024-11-26 19:42:01.235954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.110 [2024-11-26 19:42:01.244819] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.111 [2024-11-26 19:42:01.244843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.111 [2024-11-26 19:42:01.254149] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.111 [2024-11-26 19:42:01.254171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.111 [2024-11-26 19:42:01.262763] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.111 [2024-11-26 19:42:01.262790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.111 [2024-11-26 19:42:01.272082] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.111 [2024-11-26 19:42:01.272104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.111 [2024-11-26 19:42:01.279028] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.111 [2024-11-26 19:42:01.279047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.111 [2024-11-26 19:42:01.289108] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.111 [2024-11-26 19:42:01.289128] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.111 [2024-11-26 19:42:01.298257] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.111 [2024-11-26 19:42:01.298345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.111 [2024-11-26 19:42:01.307612] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.111 [2024-11-26 19:42:01.307634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.111 [2024-11-26 19:42:01.316791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.111 [2024-11-26 19:42:01.316812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.111 [2024-11-26 19:42:01.325288] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.111 [2024-11-26 19:42:01.325309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.111 [2024-11-26 19:42:01.334520] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.111 [2024-11-26 19:42:01.334544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.111 [2024-11-26 19:42:01.343868] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.111 [2024-11-26 19:42:01.343963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.111 [2024-11-26 19:42:01.350617] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.111 [2024-11-26 19:42:01.350639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.369 [2024-11-26 19:42:01.361492] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.369 [2024-11-26 19:42:01.361513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.369 [2024-11-26 19:42:01.370367] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.369 [2024-11-26 19:42:01.370455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.369 [2024-11-26 19:42:01.379698] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.369 [2024-11-26 19:42:01.379720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.369 [2024-11-26 19:42:01.393833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.369 [2024-11-26 19:42:01.393854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.369 [2024-11-26 19:42:01.402562] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.369 [2024-11-26 19:42:01.402583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.369 [2024-11-26 19:42:01.411906] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.369 [2024-11-26 19:42:01.411927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.369 [2024-11-26 19:42:01.426499] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.369 [2024-11-26 19:42:01.426521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.369 [2024-11-26 19:42:01.434100] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.369 [2024-11-26 19:42:01.434121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.369 [2024-11-26 19:42:01.442691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.369 [2024-11-26 19:42:01.442713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.369 [2024-11-26 19:42:01.451553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.369 [2024-11-26 19:42:01.451575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.369 [2024-11-26 19:42:01.460738] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.369 [2024-11-26 19:42:01.460763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.369 [2024-11-26 19:42:01.467432] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.369 [2024-11-26 19:42:01.467459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.369 [2024-11-26 19:42:01.479053] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.369 [2024-11-26 19:42:01.479076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.369 [2024-11-26 19:42:01.487701] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.369 [2024-11-26 19:42:01.487723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.369 [2024-11-26 19:42:01.496203] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.369 [2024-11-26 19:42:01.496224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.369 [2024-11-26 19:42:01.503154] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.369 [2024-11-26 19:42:01.503175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.369 [2024-11-26 19:42:01.513984] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.369 [2024-11-26 19:42:01.514007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.369 [2024-11-26 19:42:01.523336] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.369 [2024-11-26 19:42:01.523357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.369 [2024-11-26 19:42:01.532380] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.369 [2024-11-26 19:42:01.532401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.369 [2024-11-26 19:42:01.541013] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.369 [2024-11-26 19:42:01.541110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.369 [2024-11-26 19:42:01.550364] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.369 [2024-11-26 19:42:01.550386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.369 [2024-11-26 19:42:01.559608] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.369 [2024-11-26 19:42:01.559629] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.369 [2024-11-26 19:42:01.566311] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.369 [2024-11-26 19:42:01.566332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.369 [2024-11-26 19:42:01.577241] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.369 [2024-11-26 19:42:01.577266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.369 [2024-11-26 19:42:01.586379] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.369 [2024-11-26 19:42:01.586400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.369 [2024-11-26 19:42:01.595066] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.369 [2024-11-26 19:42:01.595085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.369 [2024-11-26 19:42:01.603682] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.369 [2024-11-26 19:42:01.603704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.369 [2024-11-26 19:42:01.612269] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.369 [2024-11-26 19:42:01.612360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.628 [2024-11-26 19:42:01.620998] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.628 [2024-11-26 19:42:01.621019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.628 [2024-11-26 19:42:01.629631] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.628 [2024-11-26 19:42:01.629653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.628 [2024-11-26 19:42:01.638194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.628 [2024-11-26 19:42:01.638285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.628 [2024-11-26 19:42:01.645250] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.628 [2024-11-26 19:42:01.645334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.628 [2024-11-26 19:42:01.656376] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.628 [2024-11-26 19:42:01.656460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.628 [2024-11-26 19:42:01.665410] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.628 [2024-11-26 19:42:01.665432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.628 [2024-11-26 19:42:01.674716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.628 [2024-11-26 19:42:01.674760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.628 [2024-11-26 19:42:01.683420] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.628 [2024-11-26 19:42:01.683444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.628 [2024-11-26 19:42:01.692666] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.628 [2024-11-26 19:42:01.692688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.628 [2024-11-26 19:42:01.701842] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.628 [2024-11-26 19:42:01.701867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.628 [2024-11-26 19:42:01.711097] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.628 [2024-11-26 19:42:01.711121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.628 [2024-11-26 19:42:01.719703] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.628 [2024-11-26 19:42:01.719727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.628 [2024-11-26 19:42:01.726407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.628 [2024-11-26 19:42:01.726431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.628 [2024-11-26 19:42:01.737561] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.628 [2024-11-26 19:42:01.737585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.628 [2024-11-26 19:42:01.744910] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.628 [2024-11-26 19:42:01.744932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.628 [2024-11-26 19:42:01.755491] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.628 [2024-11-26 19:42:01.755520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.628 [2024-11-26 19:42:01.764630] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.628 [2024-11-26 19:42:01.764655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.628 [2024-11-26 19:42:01.773219] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.628 [2024-11-26 19:42:01.773242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.628 [2024-11-26 19:42:01.782120] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.628 [2024-11-26 19:42:01.782145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.628 [2024-11-26 19:42:01.790896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.628 [2024-11-26 19:42:01.790922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.628 [2024-11-26 19:42:01.799463] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.628 [2024-11-26 19:42:01.799488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.628 [2024-11-26 19:42:01.808746] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.628 [2024-11-26 19:42:01.808779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.628 [2024-11-26 19:42:01.817279] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.628 [2024-11-26 19:42:01.817302] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.628 [2024-11-26 19:42:01.825929] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.628 [2024-11-26 19:42:01.825952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.628 [2024-11-26 19:42:01.832719] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.628 [2024-11-26 19:42:01.832741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.628 [2024-11-26 19:42:01.843332] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.628 [2024-11-26 19:42:01.843358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.628 [2024-11-26 19:42:01.852109] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.628 [2024-11-26 19:42:01.852138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.628 [2024-11-26 19:42:01.860585] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.628 [2024-11-26 19:42:01.860609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.628 [2024-11-26 19:42:01.869801] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.628 [2024-11-26 19:42:01.869824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.887 [2024-11-26 19:42:01.878340] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.887 [2024-11-26 19:42:01.878363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.887 [2024-11-26 19:42:01.886990] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.887 [2024-11-26 19:42:01.887013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.887 [2024-11-26 19:42:01.895654] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.887 [2024-11-26 19:42:01.895677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.887 [2024-11-26 19:42:01.904273] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.887 [2024-11-26 19:42:01.904296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.887 [2024-11-26 19:42:01.912847] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.887 [2024-11-26 19:42:01.912870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.887 [2024-11-26 19:42:01.919524] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.887 [2024-11-26 19:42:01.919546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.887 [2024-11-26 19:42:01.930578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.887 [2024-11-26 19:42:01.930600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.887 [2024-11-26 19:42:01.937858] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.887 [2024-11-26 19:42:01.937880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.887 [2024-11-26 19:42:01.948166] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.887 [2024-11-26 19:42:01.948189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.887 [2024-11-26 19:42:01.957089] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.887 [2024-11-26 19:42:01.957112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.887 [2024-11-26 19:42:01.965672] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.887 [2024-11-26 19:42:01.965695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.887 [2024-11-26 19:42:01.974252] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.887 [2024-11-26 19:42:01.974276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.887 [2024-11-26 19:42:01.983060] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.887 [2024-11-26 19:42:01.983082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.887 [2024-11-26 19:42:01.991635] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.887 [2024-11-26 19:42:01.991657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.887 [2024-11-26 19:42:02.000833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.887 [2024-11-26 19:42:02.000855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.887 [2024-11-26 19:42:02.009993] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.887 [2024-11-26 19:42:02.010016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.887 [2024-11-26 19:42:02.019353] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.887 [2024-11-26 19:42:02.019378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.887 [2024-11-26 19:42:02.027936] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.887 [2024-11-26 19:42:02.027959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.887 [2024-11-26 19:42:02.034623] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.887 [2024-11-26 19:42:02.034646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.887 [2024-11-26 19:42:02.050507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.887 [2024-11-26 19:42:02.050532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.887 [2024-11-26 19:42:02.064729] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.887 [2024-11-26 19:42:02.064757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.887 [2024-11-26 19:42:02.073747] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.887 [2024-11-26 19:42:02.073781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.887 [2024-11-26 19:42:02.082227] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.887 [2024-11-26 19:42:02.082250] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.887 [2024-11-26 19:42:02.090878] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.887 [2024-11-26 19:42:02.090900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.887 [2024-11-26 19:42:02.099955] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.887 [2024-11-26 19:42:02.099980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.887 16426.50 IOPS, 128.33 MiB/s [2024-11-26T19:42:02.134Z] [2024-11-26 19:42:02.108616] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.887 [2024-11-26 19:42:02.108639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.887 [2024-11-26 19:42:02.117870] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.887 [2024-11-26 19:42:02.117893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:06.887 [2024-11-26 19:42:02.126400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:06.887 [2024-11-26 19:42:02.126424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.145 [2024-11-26 19:42:02.135640] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.145 [2024-11-26 19:42:02.135667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.145 [2024-11-26 19:42:02.144183] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.145 [2024-11-26 19:42:02.144210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.145 [2024-11-26 19:42:02.152850] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.145 [2024-11-26 19:42:02.152873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.145 [2024-11-26 19:42:02.162109] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.145 [2024-11-26 19:42:02.162131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.145 [2024-11-26 19:42:02.168809] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.145 [2024-11-26 19:42:02.168831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.145 [2024-11-26 19:42:02.179931] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.145 [2024-11-26 19:42:02.179954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.145 [2024-11-26 19:42:02.194475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.145 [2024-11-26 19:42:02.194501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.145 [2024-11-26 19:42:02.204915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.145 [2024-11-26 19:42:02.204940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.145 [2024-11-26 19:42:02.211638] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.145 [2024-11-26 19:42:02.211662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.145 [2024-11-26 
19:42:02.222659] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.145 [2024-11-26 19:42:02.222684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.145 [2024-11-26 19:42:02.231797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.145 [2024-11-26 19:42:02.231821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.145 [2024-11-26 19:42:02.240427] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.145 [2024-11-26 19:42:02.240449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.145 [2024-11-26 19:42:02.249009] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.145 [2024-11-26 19:42:02.249030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.145 [2024-11-26 19:42:02.257573] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.145 [2024-11-26 19:42:02.257595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.145 [2024-11-26 19:42:02.266858] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.145 [2024-11-26 19:42:02.266880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.145 [2024-11-26 19:42:02.276188] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.145 [2024-11-26 19:42:02.276210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.145 [2024-11-26 19:42:02.282862] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.145 [2024-11-26 19:42:02.282884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.145 [2024-11-26 19:42:02.294104] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.145 [2024-11-26 19:42:02.294125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.145 [2024-11-26 19:42:02.300914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.146 [2024-11-26 19:42:02.300936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.146 [2024-11-26 19:42:02.311173] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.146 [2024-11-26 19:42:02.311194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.146 [2024-11-26 19:42:02.319933] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.146 [2024-11-26 19:42:02.319955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.146 [2024-11-26 19:42:02.334328] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.146 [2024-11-26 19:42:02.334354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.146 [2024-11-26 19:42:02.342179] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.146 [2024-11-26 19:42:02.342203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.146 [2024-11-26 19:42:02.351569] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.146 [2024-11-26 19:42:02.351592] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.146 [2024-11-26 19:42:02.360211] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.146 [2024-11-26 19:42:02.360235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.146 [2024-11-26 19:42:02.368782] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.146 [2024-11-26 19:42:02.368805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.146 [2024-11-26 19:42:02.377353] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.146 [2024-11-26 19:42:02.377374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.146 [2024-11-26 19:42:02.384058] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.146 [2024-11-26 19:42:02.384082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.404 [2024-11-26 19:42:02.395251] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.404 [2024-11-26 19:42:02.395275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.404 [2024-11-26 19:42:02.404100] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.404 [2024-11-26 19:42:02.404126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.404 [2024-11-26 19:42:02.413339] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.404 [2024-11-26 19:42:02.413362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.404 [2024-11-26 19:42:02.421857] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.404 [2024-11-26 19:42:02.421878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.404 [2024-11-26 19:42:02.428602] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.404 [2024-11-26 19:42:02.428624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.404 [2024-11-26 19:42:02.439759] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.404 [2024-11-26 19:42:02.439791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.404 [2024-11-26 19:42:02.448512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.404 [2024-11-26 19:42:02.448536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.404 [2024-11-26 19:42:02.457199] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.404 [2024-11-26 19:42:02.457222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.404 [2024-11-26 19:42:02.465745] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.404 [2024-11-26 19:42:02.465777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.405 [2024-11-26 19:42:02.472523] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.405 [2024-11-26 19:42:02.472546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.405 [2024-11-26 19:42:02.483645] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.405 [2024-11-26 19:42:02.483669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.405 [2024-11-26 19:42:02.492479] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.405 [2024-11-26 19:42:02.492502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.405 [2024-11-26 19:42:02.499206] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.405 [2024-11-26 19:42:02.499231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.405 [2024-11-26 19:42:02.510174] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.405 [2024-11-26 19:42:02.510199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.405 [2024-11-26 19:42:02.519228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.405 [2024-11-26 19:42:02.519250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.405 [2024-11-26 19:42:02.527867] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.405 [2024-11-26 19:42:02.527890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.405 [2024-11-26 19:42:02.534580] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.405 [2024-11-26 19:42:02.534604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.405 [2024-11-26 19:42:02.545611] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.405 [2024-11-26 19:42:02.545633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.405 [2024-11-26 19:42:02.552756] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.405 [2024-11-26 19:42:02.552786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.405 [2024-11-26 19:42:02.563271] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.405 [2024-11-26 19:42:02.563295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.405 [2024-11-26 19:42:02.571883] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.405 [2024-11-26 19:42:02.571904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.405 [2024-11-26 19:42:02.578562] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.405 [2024-11-26 19:42:02.578585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.405 [2024-11-26 19:42:02.589643] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.405 [2024-11-26 19:42:02.589667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.405 [2024-11-26 19:42:02.598880] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.405 [2024-11-26 19:42:02.598901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.405 [2024-11-26 19:42:02.608063] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.405 [2024-11-26 19:42:02.608085] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.405 [2024-11-26 19:42:02.617142] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.405 [2024-11-26 19:42:02.617164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.405 [2024-11-26 19:42:02.625489] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.405 [2024-11-26 19:42:02.625510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.405 [2024-11-26 19:42:02.634590] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.405 [2024-11-26 19:42:02.634613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.405 [2024-11-26 19:42:02.643635] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.405 [2024-11-26 19:42:02.643656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.664 [2024-11-26 19:42:02.652197] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.664 [2024-11-26 19:42:02.652220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.664 [2024-11-26 19:42:02.661285] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.664 [2024-11-26 19:42:02.661307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.664 [2024-11-26 19:42:02.670459] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.664 [2024-11-26 19:42:02.670481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.664 [2024-11-26 19:42:02.679049] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.664 [2024-11-26 19:42:02.679072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.664 [2024-11-26 19:42:02.688371] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.664 [2024-11-26 19:42:02.688393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.664 [2024-11-26 19:42:02.697678] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.664 [2024-11-26 19:42:02.697702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.664 [2024-11-26 19:42:02.706243] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.664 [2024-11-26 19:42:02.706264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.664 [2024-11-26 19:42:02.714833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.664 [2024-11-26 19:42:02.714855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.664 [2024-11-26 19:42:02.723351] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.664 [2024-11-26 19:42:02.723372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.664 [2024-11-26 19:42:02.731930] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.664 [2024-11-26 19:42:02.731950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.664 [2024-11-26 19:42:02.740457] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.664 [2024-11-26 19:42:02.740478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.664 [2024-11-26 19:42:02.749637] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.664 [2024-11-26 19:42:02.749659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.664 [2024-11-26 19:42:02.758089] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.664 [2024-11-26 19:42:02.758112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.664 [2024-11-26 19:42:02.766735] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.664 [2024-11-26 19:42:02.766759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.664 [2024-11-26 19:42:02.773482] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.664 [2024-11-26 19:42:02.773504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.664 [2024-11-26 19:42:02.784404] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.664 [2024-11-26 19:42:02.784427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.664 [2024-11-26 19:42:02.793134] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.664 [2024-11-26 19:42:02.793156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.664 [2024-11-26 19:42:02.801605] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.664 [2024-11-26 19:42:02.801627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.664 [2024-11-26 19:42:02.810287] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.664 [2024-11-26 19:42:02.810310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.664 [2024-11-26 19:42:02.818827] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.664 [2024-11-26 19:42:02.818849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.664 [2024-11-26 19:42:02.827343] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.664 [2024-11-26 19:42:02.827364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.664 [2024-11-26 19:42:02.835869] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.664 [2024-11-26 19:42:02.835891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.664 [2024-11-26 19:42:02.844427] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.664 [2024-11-26 19:42:02.844449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.664 [2024-11-26 19:42:02.853453] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.664 [2024-11-26 19:42:02.853475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.664 [2024-11-26 19:42:02.861934] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.664 [2024-11-26 19:42:02.861956] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.664 [2024-11-26 19:42:02.870939] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.664 [2024-11-26 19:42:02.870960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.664 [2024-11-26 19:42:02.879438] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.664 [2024-11-26 19:42:02.879460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.664 [2024-11-26 19:42:02.888641] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.664 [2024-11-26 19:42:02.888663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.664 [2024-11-26 19:42:02.897330] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.664 [2024-11-26 19:42:02.897353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.664 [2024-11-26 19:42:02.905915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.664 [2024-11-26 19:42:02.905937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.921 [2024-11-26 19:42:02.912957] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.921 [2024-11-26 19:42:02.912979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.921 [2024-11-26 19:42:02.924116] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.921 [2024-11-26 19:42:02.924138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.921 [2024-11-26 19:42:02.933450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.921 [2024-11-26 19:42:02.933472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.921 [2024-11-26 19:42:02.942558] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.921 [2024-11-26 19:42:02.942580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.921 [2024-11-26 19:42:02.951172] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.921 [2024-11-26 19:42:02.951194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.921 [2024-11-26 19:42:02.959735] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.921 [2024-11-26 19:42:02.959757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.921 [2024-11-26 19:42:02.968205] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.921 [2024-11-26 19:42:02.968227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.921 [2024-11-26 19:42:02.977369] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.921 [2024-11-26 19:42:02.977391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.921 [2024-11-26 19:42:02.984070] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.922 [2024-11-26 19:42:02.984092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.922 [2024-11-26 19:42:02.995055] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.922 [2024-11-26 19:42:02.995076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.922 [2024-11-26 19:42:03.001819] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.922 [2024-11-26 19:42:03.001841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.922 [2024-11-26 19:42:03.012757] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.922 [2024-11-26 19:42:03.012785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.922 [2024-11-26 19:42:03.021344] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.922 [2024-11-26 19:42:03.021366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.922 [2024-11-26 19:42:03.030387] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.922 [2024-11-26 19:42:03.030409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.922 [2024-11-26 19:42:03.039567] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.922 [2024-11-26 19:42:03.039589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.922 [2024-11-26 19:42:03.048827] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.922 [2024-11-26 19:42:03.048849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.922 [2024-11-26 19:42:03.057308] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.922 [2024-11-26 19:42:03.057330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.922 [2024-11-26 19:42:03.066431] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.922 [2024-11-26 19:42:03.066452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.922 [2024-11-26 19:42:03.073089] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.922 [2024-11-26 19:42:03.073110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.922 [2024-11-26 19:42:03.083220] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.922 [2024-11-26 19:42:03.083243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.922 [2024-11-26 19:42:03.098154] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.922 [2024-11-26 19:42:03.098180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.922 16469.67 IOPS, 128.67 MiB/s [2024-11-26T19:42:03.169Z] [2024-11-26 19:42:03.108920] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.922 [2024-11-26 19:42:03.108946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.922 [2024-11-26 19:42:03.115727] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.922 [2024-11-26 19:42:03.115751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.922 [2024-11-26 19:42:03.126774] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:08:07.922 [2024-11-26 19:42:03.126796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.922 [2024-11-26 19:42:03.135579] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.922 [2024-11-26 19:42:03.135601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.922 [2024-11-26 19:42:03.144231] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.922 [2024-11-26 19:42:03.144254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.922 [2024-11-26 19:42:03.158868] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.922 [2024-11-26 19:42:03.158890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.179 [2024-11-26 19:42:03.167349] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.179 [2024-11-26 19:42:03.167371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.179 [2024-11-26 19:42:03.175975] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.179 [2024-11-26 19:42:03.175999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.179 [2024-11-26 19:42:03.185192] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.179 [2024-11-26 19:42:03.185216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.179 [2024-11-26 19:42:03.193867] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.179 [2024-11-26 19:42:03.193889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.179 [2024-11-26 19:42:03.203109] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.179 [2024-11-26 19:42:03.203130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.179 [2024-11-26 19:42:03.212219] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.179 [2024-11-26 19:42:03.212243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.179 [2024-11-26 19:42:03.220642] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.179 [2024-11-26 19:42:03.220663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.179 [2024-11-26 19:42:03.229155] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.179 [2024-11-26 19:42:03.229177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.179 [2024-11-26 19:42:03.238426] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.179 [2024-11-26 19:42:03.238449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.179 [2024-11-26 19:42:03.247705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.179 [2024-11-26 19:42:03.247727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.179 [2024-11-26 19:42:03.257036] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.179 [2024-11-26 19:42:03.257058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.179 [2024-11-26 19:42:03.266284] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.179 [2024-11-26 19:42:03.266306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.179 [2024-11-26 19:42:03.274847] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.179 [2024-11-26 19:42:03.274868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.179 [2024-11-26 19:42:03.284179] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.179 [2024-11-26 19:42:03.284201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.179 [2024-11-26 19:42:03.292658] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.179 [2024-11-26 19:42:03.292681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.179 [2024-11-26 19:42:03.301261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.179 [2024-11-26 19:42:03.301283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.179 [2024-11-26 19:42:03.307957] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.179 [2024-11-26 19:42:03.307978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.179 [2024-11-26 19:42:03.319156] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.179 [2024-11-26 19:42:03.319178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.179 [2024-11-26 19:42:03.328209] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.180 [2024-11-26 19:42:03.328231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.180 [2024-11-26 19:42:03.334915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.180 [2024-11-26 19:42:03.334937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.180 [2024-11-26 19:42:03.345205] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.180 [2024-11-26 19:42:03.345228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.180 [2024-11-26 19:42:03.354225] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.180 [2024-11-26 19:42:03.354249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.180 [2024-11-26 19:42:03.362760] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.180 [2024-11-26 19:42:03.362791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.180 [2024-11-26 19:42:03.371228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.180 [2024-11-26 19:42:03.371250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.180 [2024-11-26 19:42:03.379917] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.180 [2024-11-26 19:42:03.379939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.180 [2024-11-26 19:42:03.389103] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.180 [2024-11-26 19:42:03.389125] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.180 [2024-11-26 19:42:03.398510] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.180 [2024-11-26 19:42:03.398532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.180 [2024-11-26 19:42:03.407849] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.180 [2024-11-26 19:42:03.407870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.180 [2024-11-26 19:42:03.417061] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.180 [2024-11-26 19:42:03.417083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.438 [2024-11-26 19:42:03.425726] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.438 [2024-11-26 19:42:03.425749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.438 [2024-11-26 19:42:03.434941] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.438 [2024-11-26 19:42:03.434963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.438 [2024-11-26 19:42:03.443414] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.438 [2024-11-26 19:42:03.443437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.438 [2024-11-26 19:42:03.452028] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.438 [2024-11-26 19:42:03.452053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.438 [2024-11-26 19:42:03.461332] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.438 [2024-11-26 19:42:03.461355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.438 [2024-11-26 19:42:03.470572] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.438 [2024-11-26 19:42:03.470594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.438 [2024-11-26 19:42:03.479163] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.438 [2024-11-26 19:42:03.479185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.438 [2024-11-26 19:42:03.488407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.438 [2024-11-26 19:42:03.488430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.438 [2024-11-26 19:42:03.496904] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.438 [2024-11-26 19:42:03.496926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.438 [2024-11-26 19:42:03.506212] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.438 [2024-11-26 19:42:03.506235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.438 [2024-11-26 19:42:03.514774] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.438 [2024-11-26 19:42:03.514795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.438 [2024-11-26 19:42:03.521535] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.438 [2024-11-26 19:42:03.521558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.438 [2024-11-26 19:42:03.531828] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.438 [2024-11-26 19:42:03.531849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.438 [2024-11-26 19:42:03.540639] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.438 [2024-11-26 19:42:03.540661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.438 [2024-11-26 19:42:03.547374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.438 [2024-11-26 19:42:03.547395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.438 [2024-11-26 19:42:03.558279] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.438 [2024-11-26 19:42:03.558302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.438 [2024-11-26 19:42:03.566881] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.438 [2024-11-26 19:42:03.566903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.438 [2024-11-26 19:42:03.575477] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.438 [2024-11-26 19:42:03.575499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.438 [2024-11-26 19:42:03.584780] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.438 [2024-11-26 19:42:03.584802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.438 [2024-11-26 19:42:03.593314] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.438 [2024-11-26 19:42:03.593336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.438 [2024-11-26 19:42:03.601860] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.438 [2024-11-26 19:42:03.601882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.438 [2024-11-26 19:42:03.611163] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.438 [2024-11-26 19:42:03.611185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.438 [2024-11-26 19:42:03.619885] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.438 [2024-11-26 19:42:03.619908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.438 [2024-11-26 19:42:03.628582] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.438 [2024-11-26 19:42:03.628604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.438 [2024-11-26 19:42:03.637937] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.438 [2024-11-26 19:42:03.637959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.438 [2024-11-26 19:42:03.647095] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.438 [2024-11-26 19:42:03.647117] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.438 [2024-11-26 19:42:03.655681] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.438 [2024-11-26 19:42:03.655702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.438 [2024-11-26 19:42:03.664781] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.438 [2024-11-26 19:42:03.664802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.438 [2024-11-26 19:42:03.673382] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.438 [2024-11-26 19:42:03.673405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.438 [2024-11-26 19:42:03.681950] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.438 [2024-11-26 19:42:03.681972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.696 [2024-11-26 19:42:03.690496] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.696 [2024-11-26 19:42:03.690518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.696 [2024-11-26 19:42:03.697200] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.696 [2024-11-26 19:42:03.697222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.696 [2024-11-26 19:42:03.707585] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.696 [2024-11-26 19:42:03.707607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.696 [2024-11-26 19:42:03.716369] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.696 [2024-11-26 19:42:03.716392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.696 [2024-11-26 19:42:03.724946] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.696 [2024-11-26 19:42:03.724967] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.696 [2024-11-26 19:42:03.739861] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.696 [2024-11-26 19:42:03.739883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.696 [2024-11-26 19:42:03.750945] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.696 [2024-11-26 19:42:03.750967] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.696 [2024-11-26 19:42:03.759542] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.696 [2024-11-26 19:42:03.759564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.696 [2024-11-26 19:42:03.768123] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.696 [2024-11-26 19:42:03.768145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.696 [2024-11-26 19:42:03.777423] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.696 [2024-11-26 19:42:03.777445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.696 [2024-11-26 19:42:03.786009] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.696 [2024-11-26 19:42:03.786032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.696 [2024-11-26 19:42:03.794632] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.696 [2024-11-26 19:42:03.794654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.696 [2024-11-26 19:42:03.803184] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.696 [2024-11-26 19:42:03.803206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.696 [2024-11-26 19:42:03.811778] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.696 [2024-11-26 19:42:03.811800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.696 [2024-11-26 19:42:03.820404] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.696 [2024-11-26 19:42:03.820426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.696 [2024-11-26 19:42:03.828927] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.696 [2024-11-26 19:42:03.828949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.696 [2024-11-26 19:42:03.838191] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.696 [2024-11-26 19:42:03.838214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.696 [2024-11-26 19:42:03.847304] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.696 [2024-11-26 19:42:03.847326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.696 [2024-11-26 19:42:03.855760] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.696 [2024-11-26 19:42:03.855789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.696 [2024-11-26 19:42:03.864313] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.696 [2024-11-26 19:42:03.864335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.696 [2024-11-26 19:42:03.872836] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.696 [2024-11-26 19:42:03.872859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.696 [2024-11-26 19:42:03.881466] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.696 [2024-11-26 19:42:03.881489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.696 [2024-11-26 19:42:03.890142] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.696 [2024-11-26 19:42:03.890165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.696 [2024-11-26 19:42:03.898729] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.696 [2024-11-26 19:42:03.898750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.696 [2024-11-26 19:42:03.907956] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.696 [2024-11-26 19:42:03.907979] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.696 [2024-11-26 19:42:03.917109] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.696 [2024-11-26 19:42:03.917131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.696 [2024-11-26 19:42:03.925823] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.696 [2024-11-26 19:42:03.925845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.696 [2024-11-26 19:42:03.932535] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.696 [2024-11-26 19:42:03.932557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.955 [2024-11-26 19:42:03.943719] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.955 [2024-11-26 19:42:03.943742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.955 [2024-11-26 19:42:03.952561] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.955 [2024-11-26 19:42:03.952586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.955 [2024-11-26 19:42:03.961176] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.955 [2024-11-26 19:42:03.961202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.955 [2024-11-26 19:42:03.969820] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.955 [2024-11-26 19:42:03.969843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.955 [2024-11-26 19:42:03.979038] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.955 [2024-11-26 19:42:03.979060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.955 [2024-11-26 19:42:03.987622] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.955 [2024-11-26 19:42:03.987644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.955 [2024-11-26 19:42:03.996643] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.955 [2024-11-26 19:42:03.996666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.955 [2024-11-26 19:42:04.005042] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.955 [2024-11-26 19:42:04.005064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.955 [2024-11-26 19:42:04.011723] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.955 [2024-11-26 19:42:04.011745] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.955 [2024-11-26 19:42:04.022734] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.955 [2024-11-26 19:42:04.022755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.955 [2024-11-26 19:42:04.031535] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.955 [2024-11-26 19:42:04.031557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.955 [2024-11-26 19:42:04.040152] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.955 [2024-11-26 19:42:04.040174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.955 [2024-11-26 19:42:04.046905] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.955 [2024-11-26 19:42:04.046927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.955 [2024-11-26 19:42:04.057897] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.955 [2024-11-26 19:42:04.057920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.955 [2024-11-26 19:42:04.066989] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.955 [2024-11-26 19:42:04.067011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.955 [2024-11-26 19:42:04.075626] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.955 [2024-11-26 19:42:04.075648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.955 [2024-11-26 19:42:04.084168] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.955 [2024-11-26 19:42:04.084190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.955 [2024-11-26 19:42:04.093578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.955 [2024-11-26 19:42:04.093601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.956 [2024-11-26 19:42:04.102532] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.956 [2024-11-26 19:42:04.102555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.956 16498.00 IOPS, 128.89 MiB/s [2024-11-26T19:42:04.203Z] [2024-11-26 19:42:04.111464] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.956 [2024-11-26 19:42:04.111486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.956 [2024-11-26 19:42:04.119958] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.956 [2024-11-26 19:42:04.119979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.956 [2024-11-26 19:42:04.129267] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.956 [2024-11-26 19:42:04.129290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.956 [2024-11-26 19:42:04.137787] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.956 [2024-11-26 19:42:04.137808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.956 [2024-11-26 19:42:04.146468] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.956 [2024-11-26 19:42:04.146492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.956 [2024-11-26 19:42:04.155164] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.956 [2024-11-26 19:42:04.155187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.956 [2024-11-26 19:42:04.169641] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:08:08.956 [2024-11-26 19:42:04.169667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.956 [2024-11-26 19:42:04.178560] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.956 [2024-11-26 19:42:04.178584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same pair of entries, subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use followed by nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace, repeats for every retry from 19:42:04.187276 through 19:42:05.067331 (elapsed-time prefixes 00:08:08.956 through 00:08:09.991); only the timestamps differ ...]
00:08:09.991 [2024-11-26 19:42:05.076535] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.991 [2024-11-26 19:42:05.076558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.991 [2024-11-26 19:42:05.085122]
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.991 [2024-11-26 19:42:05.085144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.991 [2024-11-26 19:42:05.093700] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.991 [2024-11-26 19:42:05.093725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.991 [2024-11-26 19:42:05.102190] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.991 [2024-11-26 19:42:05.102213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.991 16517.20 IOPS, 129.04 MiB/s [2024-11-26T19:42:05.238Z] [2024-11-26 19:42:05.108506] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.991 [2024-11-26 19:42:05.108528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.991 00:08:09.991 Latency(us) 00:08:09.991 [2024-11-26T19:42:05.238Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:09.991 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:08:09.991 Nvme1n1 : 5.01 16519.20 129.06 0.00 0.00 7742.41 3062.55 17543.48 00:08:09.991 [2024-11-26T19:42:05.238Z] =================================================================================================================== 00:08:09.991 [2024-11-26T19:42:05.238Z] Total : 16519.20 129.06 0.00 0.00 7742.41 3062.55 17543.48 00:08:09.991 [2024-11-26 19:42:05.116505] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.991 [2024-11-26 19:42:05.116527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.991 [2024-11-26 19:42:05.124502] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.991 [2024-11-26 19:42:05.124522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.991 [2024-11-26 19:42:05.132507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.991 [2024-11-26 19:42:05.132528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.991 [2024-11-26 19:42:05.140504] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.991 [2024-11-26 19:42:05.140525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.991 [2024-11-26 19:42:05.148507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.991 [2024-11-26 19:42:05.148528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.991 [2024-11-26 19:42:05.156512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.991 [2024-11-26 19:42:05.156537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.991 [2024-11-26 19:42:05.164524] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.991 [2024-11-26 19:42:05.164548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.991 [2024-11-26 19:42:05.172512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.991 [2024-11-26 19:42:05.172531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.991 [2024-11-26 
19:42:05.180513] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.991 [2024-11-26 19:42:05.180531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.991 [2024-11-26 19:42:05.188515] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.991 [2024-11-26 19:42:05.188533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.991 [2024-11-26 19:42:05.196517] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.991 [2024-11-26 19:42:05.196537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.991 [2024-11-26 19:42:05.204517] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.991 [2024-11-26 19:42:05.204534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.991 [2024-11-26 19:42:05.212517] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.991 [2024-11-26 19:42:05.212534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.991 [2024-11-26 19:42:05.220521] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.991 [2024-11-26 19:42:05.220538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.991 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (64421) - No such process 00:08:09.991 19:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 64421 00:08:09.991 19:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.991 19:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.991 19:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:09.991 19:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.991 19:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:09.991 19:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.991 19:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:10.249 delay0 00:08:10.249 19:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.249 19:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:08:10.249 19:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.249 19:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:10.249 19:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.249 19:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:08:10.249 [2024-11-26 19:42:05.451457] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:16.804 
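
The long run of "Requested NSID 1 already in use" / "Unable to add namespace" pairs above comes from the zcopy test repeatedly re-issuing nvmf_subsystem_add_ns for NSID 1 while that namespace is still attached; each RPC is expected to fail, and the retries exercise subsystem pause/resume while I/O is in flight (hence nvmf_rpc_ns_paused in the messages). Once the background I/O job is gone (the "No such process" and "wait 64421" lines), the script removes NSID 1, wraps malloc0 in a delay bdev and re-exports it as delay0, then drives the abort example at the target. A minimal sketch of that RPC sequence, assuming rpc.py talks to the target on its default /var/tmp/spdk.sock and reusing the subsystem, bdev and address names from this log:

  # Re-adding an NSID that is already attached fails with
  # "Requested NSID 1 already in use" followed by "Unable to add namespace".
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

  # Drop the namespace, interpose a delay bdev, and re-export it as NSID 1.
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1

  # Run the abort example against the delayed namespace (same flags as the log).
  build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'
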
Initializing NVMe Controllers 00:08:16.804 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:08:16.804 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:16.804 Initialization complete. Launching workers. 00:08:16.804 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 269, failed: 19503 00:08:16.804 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 19672, failed to submit 100 00:08:16.804 success 19604, unsuccessful 68, failed 0 00:08:16.804 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:08:16.804 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:08:16.804 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:16.804 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:08:16.804 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:16.804 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:08:16.804 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:16.804 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:16.804 rmmod nvme_tcp 00:08:16.804 rmmod nvme_fabrics 00:08:16.804 rmmod nvme_keyring 00:08:16.804 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:16.804 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:08:16.804 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:08:16.804 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 64265 ']' 00:08:16.804 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 64265 00:08:16.804 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 64265 ']' 00:08:16.804 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 64265 00:08:16.804 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:08:16.804 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:16.804 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64265 00:08:16.804 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:16.804 killing process with pid 64265 00:08:16.804 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:16.804 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64265' 00:08:16.804 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 64265 00:08:16.804 19:42:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 64265 00:08:16.804 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:16.804 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:16.804 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:16.804 19:42:12 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:08:16.804 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:08:16.804 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:16.804 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:08:16.804 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:16.804 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:16.804 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:16.804 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:16.804 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:17.062 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:17.062 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:17.062 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:17.062 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:17.062 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:17.062 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:17.062 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:17.062 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:17.062 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:17.062 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:17.062 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:17.062 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:17.062 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:17.062 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:17.062 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:08:17.062 00:08:17.062 real 0m24.467s 00:08:17.062 user 0m41.506s 00:08:17.062 sys 0m5.258s 00:08:17.062 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:17.062 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:17.062 ************************************ 00:08:17.062 END TEST nvmf_zcopy 00:08:17.062 ************************************ 00:08:17.062 19:42:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:17.062 19:42:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:17.062 19:42:12 nvmf_tcp.nvmf_target_core -- 
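
nvmftestfini then tears the zcopy fixture down in a fixed order: sync, unload the host-side NVMe modules, kill the nvmf_tgt started for this test (pid 64265 here), strip only the SPDK_NVMF-tagged iptables rules, detach and delete the veth and bridge interfaces, and remove the nvmf_tgt_ns_spdk namespace. A condensed sketch of that cleanup, assuming the interface and namespace names used throughout this log:

  sync
  modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics   # unload host initiator modules
  kill 64265 && wait 64265                                 # stop the nvmf_tgt started for this test
  iptables-save | grep -v SPDK_NVMF | iptables-restore     # drop only the SPDK-tagged ACCEPT rules
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" nomaster && ip link set "$dev" down
  done
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip link delete nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
  ip netns delete nvmf_tgt_ns_spdk                         # what remove_spdk_ns boils down to
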
common/autotest_common.sh@1111 -- # xtrace_disable 00:08:17.062 19:42:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:17.062 ************************************ 00:08:17.062 START TEST nvmf_nmic 00:08:17.062 ************************************ 00:08:17.062 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:17.321 * Looking for test storage... 00:08:17.321 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:17.321 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:17.321 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:08:17.321 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:17.321 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:17.321 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:17.321 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:17.321 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:17.321 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:08:17.321 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:08:17.321 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:08:17.321 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:08:17.321 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:08:17.321 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:08:17.321 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:08:17.321 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:17.321 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:08:17.321 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:08:17.321 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:17.321 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:17.321 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:08:17.321 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:08:17.321 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:17.321 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:08:17.321 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:08:17.321 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:08:17.321 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:08:17.321 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:17.321 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:08:17.321 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:08:17.321 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:17.321 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:17.321 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:08:17.321 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:17.321 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:17.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.321 --rc genhtml_branch_coverage=1 00:08:17.321 --rc genhtml_function_coverage=1 00:08:17.321 --rc genhtml_legend=1 00:08:17.321 --rc geninfo_all_blocks=1 00:08:17.321 --rc geninfo_unexecuted_blocks=1 00:08:17.321 00:08:17.321 ' 00:08:17.321 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:17.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.321 --rc genhtml_branch_coverage=1 00:08:17.321 --rc genhtml_function_coverage=1 00:08:17.321 --rc genhtml_legend=1 00:08:17.321 --rc geninfo_all_blocks=1 00:08:17.322 --rc geninfo_unexecuted_blocks=1 00:08:17.322 00:08:17.322 ' 00:08:17.322 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:17.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.322 --rc genhtml_branch_coverage=1 00:08:17.322 --rc genhtml_function_coverage=1 00:08:17.322 --rc genhtml_legend=1 00:08:17.322 --rc geninfo_all_blocks=1 00:08:17.322 --rc geninfo_unexecuted_blocks=1 00:08:17.322 00:08:17.322 ' 00:08:17.322 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:17.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.322 --rc genhtml_branch_coverage=1 00:08:17.322 --rc genhtml_function_coverage=1 00:08:17.322 --rc genhtml_legend=1 00:08:17.322 --rc geninfo_all_blocks=1 00:08:17.322 --rc geninfo_unexecuted_blocks=1 00:08:17.322 00:08:17.322 ' 00:08:17.322 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:17.322 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:08:17.322 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:17.322 19:42:12 
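
nvmf/common.sh derives the initiator identity from nvme-cli: gen-hostnqn emits a UUID-based NQN, and the UUID suffix doubles as the host ID that later nvme connect calls pass along. A small sketch of that derivation; the parameter expansion is one plausible way to split the UUID out, and the last line only illustrates where the values end up:

  NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}         # keep just the UUID after the last colon
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
  # later: nvme connect -t tcp -a 10.0.0.3 -s 4420 -n <subsystem-nqn> "${NVME_HOST[@]}"
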
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:17.322 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:17.322 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:17.322 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:17.322 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:17.322 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:17.322 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:17.322 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:17.322 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:17.322 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:08:17.322 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=91838eb1-5852-43eb-90b2-09876f360ab2 00:08:17.322 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:17.322 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:17.322 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:17.322 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:17.322 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:17.322 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:08:17.322 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:17.322 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:17.322 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:17.322 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.322 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.322 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.322 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:08:17.322 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.322 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:08:17.322 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:17.322 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:17.322 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:17.322 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:17.322 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:17.322 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:17.322 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:17.322 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:17.322 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:17.322 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:17.322 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:17.322 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:17.322 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:08:17.322 19:42:12 
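
The "[: : integer expression expected" complaint from common.sh line 33 is benign: the variable being tested is empty in this run, and test's -eq operator needs integers on both sides, so it prints the warning and returns a non-zero status, which the surrounding check simply treats as false. The behaviour is easy to reproduce in any bash shell:

  [ '' -eq 1 ]                   # bash: [: : integer expression expected (exit status 2)
  [ "${MAYBE_UNSET:-0}" -eq 1 ]  # defaulting an (illustrative) unset variable to 0 keeps the test quiet
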
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:17.322 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:17.323 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:17.323 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:17.323 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:17.323 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:17.323 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:17.323 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:17.323 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:17.323 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:17.323 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:17.323 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:17.323 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:17.323 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:17.323 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:17.323 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:17.323 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:17.323 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:17.323 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:17.323 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:17.323 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:17.323 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:17.323 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:17.323 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:17.323 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:17.323 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:17.323 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:17.323 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:17.323 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:17.323 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:17.323 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:17.323 Cannot 
find device "nvmf_init_br" 00:08:17.323 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:08:17.323 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:17.323 Cannot find device "nvmf_init_br2" 00:08:17.323 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:08:17.323 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:17.323 Cannot find device "nvmf_tgt_br" 00:08:17.323 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:08:17.323 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:17.323 Cannot find device "nvmf_tgt_br2" 00:08:17.323 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:08:17.323 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:17.323 Cannot find device "nvmf_init_br" 00:08:17.323 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:08:17.323 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:17.323 Cannot find device "nvmf_init_br2" 00:08:17.323 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:08:17.323 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:17.323 Cannot find device "nvmf_tgt_br" 00:08:17.323 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:08:17.323 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:17.323 Cannot find device "nvmf_tgt_br2" 00:08:17.323 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:08:17.323 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:17.323 Cannot find device "nvmf_br" 00:08:17.323 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:08:17.323 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:17.323 Cannot find device "nvmf_init_if" 00:08:17.323 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:08:17.323 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:17.323 Cannot find device "nvmf_init_if2" 00:08:17.323 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:08:17.323 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:17.323 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:17.323 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:08:17.323 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:17.323 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:17.323 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:08:17.323 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:17.582 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:08:17.582 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:17.582 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:17.582 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:17.582 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:17.582 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:17.582 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:17.582 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:17.582 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:17.582 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:17.582 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:17.582 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:17.582 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:17.582 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:17.582 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:17.582 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:17.582 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:17.582 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:17.582 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:17.582 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:17.582 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:17.582 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:17.582 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:17.582 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:17.582 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:17.582 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:17.582 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:17.582 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
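
nvmf_veth_init then builds the virtual test network used for the rest of this run: the nvmf_tgt_ns_spdk namespace holds the target ends of two veth pairs (10.0.0.3 and 10.0.0.4), the initiator ends stay in the root namespace (10.0.0.1 and 10.0.0.2), the peer interfaces are enslaved to an nvmf_br bridge, and ACCEPT rules tagged SPDK_NVMF open TCP port 4420. A condensed sketch of that topology with the names from the log (bring-up of the _if interfaces and the second iptables rule are elided for brevity):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk            # target ends live inside the namespace
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" up
      ip link set "$dev" master nvmf_br                      # bridge all four peer ends together
  done
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
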
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:17.582 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:17.582 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:17.582 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:17.582 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:17.582 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:17.582 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:08:17.582 00:08:17.582 --- 10.0.0.3 ping statistics --- 00:08:17.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.582 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:08:17.582 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:17.582 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:17.582 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:08:17.582 00:08:17.582 --- 10.0.0.4 ping statistics --- 00:08:17.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.582 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:08:17.582 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:17.582 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:17.582 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:08:17.582 00:08:17.582 --- 10.0.0.1 ping statistics --- 00:08:17.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.582 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:08:17.582 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:17.582 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:17.582 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.105 ms 00:08:17.582 00:08:17.582 --- 10.0.0.2 ping statistics --- 00:08:17.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.582 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:08:17.582 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:17.582 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:08:17.582 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:17.582 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:17.582 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:17.582 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:17.582 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:17.582 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:17.582 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:17.582 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:08:17.582 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:17.582 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:17.582 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:17.582 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=64791 00:08:17.582 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 64791 00:08:17.582 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 64791 ']' 00:08:17.582 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.582 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:17.582 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:17.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.582 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.582 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:17.582 19:42:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:17.582 [2024-11-26 19:42:12.775711] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
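The nvmfappstart call above prepends the namespace prefix (NVMF_TARGET_NS_CMD) to the application command, so the SPDK target runs entirely inside nvmf_tgt_ns_spdk and its listeners bind to 10.0.0.3/10.0.0.4, while the RPC socket /var/tmp/spdk.sock stays reachable from the host-side scripts. A simplified equivalent of that launch, with the flags taken from the trace, is sketched below; this is an illustration of the pattern, not the helper's exact code.

# run the SPDK target inside the namespace (flags as seen in the trace):
#   -i 0     shared-memory instance id
#   -e 0xFFFF enable all tracepoint groups ("Tracepoint Group Mask 0xFFFF specified")
#   -m 0xF   core mask, reactors on cores 0-3
NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)
"${NVMF_TARGET_NS_CMD[@]}" /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!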
00:08:17.582 [2024-11-26 19:42:12.775792] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:17.840 [2024-11-26 19:42:12.920874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:17.840 [2024-11-26 19:42:12.957962] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:17.840 [2024-11-26 19:42:12.958004] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:17.840 [2024-11-26 19:42:12.958011] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:17.840 [2024-11-26 19:42:12.958016] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:17.840 [2024-11-26 19:42:12.958020] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:17.840 [2024-11-26 19:42:12.958763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:17.840 [2024-11-26 19:42:12.958805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:17.840 [2024-11-26 19:42:12.958896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:17.840 [2024-11-26 19:42:12.958900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.840 [2024-11-26 19:42:12.990596] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:18.786 19:42:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:18.786 19:42:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:08:18.786 19:42:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:18.786 19:42:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:18.786 19:42:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:18.786 19:42:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:18.786 19:42:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:18.786 19:42:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.786 19:42:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:18.786 [2024-11-26 19:42:13.724895] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:18.786 19:42:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.786 19:42:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:18.786 19:42:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.786 19:42:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:18.786 Malloc0 00:08:18.786 19:42:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.786 19:42:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:18.786 19:42:13 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.786 19:42:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:18.786 19:42:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.786 19:42:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:18.786 19:42:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.786 19:42:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:18.786 19:42:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.786 19:42:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:18.786 19:42:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.786 19:42:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:18.786 [2024-11-26 19:42:13.780479] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:18.786 19:42:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.786 test case1: single bdev can't be used in multiple subsystems 00:08:18.786 19:42:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:08:18.786 19:42:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:08:18.786 19:42:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.786 19:42:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:18.786 19:42:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.786 19:42:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:08:18.786 19:42:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.786 19:42:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:18.786 19:42:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.786 19:42:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:08:18.786 19:42:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:08:18.786 19:42:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.786 19:42:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:18.786 [2024-11-26 19:42:13.804377] bdev.c:8323:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:08:18.786 [2024-11-26 19:42:13.804501] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:08:18.786 [2024-11-26 19:42:13.804511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.786 request: 00:08:18.786 { 00:08:18.786 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:18.786 "namespace": { 00:08:18.786 "bdev_name": "Malloc0", 00:08:18.786 "no_auto_visible": false 00:08:18.786 }, 00:08:18.786 "method": "nvmf_subsystem_add_ns", 00:08:18.786 "req_id": 1 00:08:18.786 } 00:08:18.786 Got JSON-RPC error response 00:08:18.786 response: 00:08:18.786 { 00:08:18.786 "code": -32602, 00:08:18.786 "message": "Invalid parameters" 00:08:18.786 } 00:08:18.786 Adding namespace failed - expected result. 00:08:18.787 test case2: host connect to nvmf target in multiple paths 00:08:18.787 19:42:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:18.787 19:42:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:08:18.787 19:42:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:08:18.787 19:42:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:08:18.787 19:42:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:08:18.787 19:42:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:08:18.787 19:42:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.787 19:42:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:18.787 [2024-11-26 19:42:13.816469] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:08:18.787 19:42:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.787 19:42:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid=91838eb1-5852-43eb-90b2-09876f360ab2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:08:18.787 19:42:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid=91838eb1-5852-43eb-90b2-09876f360ab2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:08:19.044 19:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:08:19.044 19:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:08:19.044 19:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:19.044 19:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:08:19.044 19:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:08:20.953 19:42:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:08:20.953 19:42:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:08:20.953 19:42:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:08:20.953 19:42:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:08:20.953 19:42:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:08:20.953 19:42:16 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:08:20.953 19:42:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:20.953 [global] 00:08:20.953 thread=1 00:08:20.953 invalidate=1 00:08:20.953 rw=write 00:08:20.953 time_based=1 00:08:20.953 runtime=1 00:08:20.953 ioengine=libaio 00:08:20.953 direct=1 00:08:20.954 bs=4096 00:08:20.954 iodepth=1 00:08:20.954 norandommap=0 00:08:20.954 numjobs=1 00:08:20.954 00:08:20.954 verify_dump=1 00:08:20.954 verify_backlog=512 00:08:20.954 verify_state_save=0 00:08:20.954 do_verify=1 00:08:20.954 verify=crc32c-intel 00:08:20.954 [job0] 00:08:20.954 filename=/dev/nvme0n1 00:08:20.954 Could not set queue depth (nvme0n1) 00:08:21.213 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:21.213 fio-3.35 00:08:21.213 Starting 1 thread 00:08:22.149 00:08:22.149 job0: (groupid=0, jobs=1): err= 0: pid=64883: Tue Nov 26 19:42:17 2024 00:08:22.149 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:08:22.149 slat (nsec): min=5357, max=70232, avg=6929.32, stdev=3629.96 00:08:22.149 clat (usec): min=104, max=545, avg=152.94, stdev=25.15 00:08:22.149 lat (usec): min=111, max=550, avg=159.87, stdev=26.20 00:08:22.149 clat percentiles (usec): 00:08:22.149 | 1.00th=[ 112], 5.00th=[ 120], 10.00th=[ 125], 20.00th=[ 133], 00:08:22.149 | 30.00th=[ 141], 40.00th=[ 147], 50.00th=[ 153], 60.00th=[ 159], 00:08:22.149 | 70.00th=[ 165], 80.00th=[ 172], 90.00th=[ 180], 95.00th=[ 186], 00:08:22.149 | 99.00th=[ 204], 99.50th=[ 281], 99.90th=[ 359], 99.95th=[ 424], 00:08:22.149 | 99.99th=[ 545] 00:08:22.149 write: IOPS=4061, BW=15.9MiB/s (16.6MB/s)(15.9MiB/1001msec); 0 zone resets 00:08:22.149 slat (usec): min=8, max=107, avg=10.14, stdev= 3.32 00:08:22.149 clat (usec): min=61, max=278, avg=93.19, stdev=13.54 00:08:22.149 lat (usec): min=75, max=386, avg=103.32, stdev=14.11 00:08:22.149 clat percentiles (usec): 00:08:22.149 | 1.00th=[ 70], 5.00th=[ 73], 10.00th=[ 75], 20.00th=[ 80], 00:08:22.149 | 30.00th=[ 85], 40.00th=[ 90], 50.00th=[ 94], 60.00th=[ 98], 00:08:22.149 | 70.00th=[ 102], 80.00th=[ 105], 90.00th=[ 111], 95.00th=[ 114], 00:08:22.149 | 99.00th=[ 121], 99.50th=[ 128], 99.90th=[ 139], 99.95th=[ 141], 00:08:22.149 | 99.99th=[ 281] 00:08:22.149 bw ( KiB/s): min=16384, max=16384, per=100.00%, avg=16384.00, stdev= 0.00, samples=1 00:08:22.149 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:08:22.149 lat (usec) : 100=35.19%, 250=64.46%, 500=0.34%, 750=0.01% 00:08:22.149 cpu : usr=1.60%, sys=5.10%, ctx=7654, majf=0, minf=5 00:08:22.149 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:22.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:22.149 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:22.149 issued rwts: total=3584,4066,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:22.149 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:22.149 00:08:22.149 Run status group 0 (all jobs): 00:08:22.149 READ: bw=14.0MiB/s (14.7MB/s), 14.0MiB/s-14.0MiB/s (14.7MB/s-14.7MB/s), io=14.0MiB (14.7MB), run=1001-1001msec 00:08:22.149 WRITE: bw=15.9MiB/s (16.6MB/s), 15.9MiB/s-15.9MiB/s (16.6MB/s-16.6MB/s), io=15.9MiB (16.7MB), run=1001-1001msec 00:08:22.149 00:08:22.149 Disk stats (read/write): 00:08:22.149 nvme0n1: ios=3311/3584, merge=0/0, ticks=522/349, in_queue=871, 
util=91.18% 00:08:22.149 19:42:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:22.406 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:22.407 19:42:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:22.407 19:42:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:08:22.407 19:42:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:22.407 19:42:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:08:22.407 19:42:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:08:22.407 19:42:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:22.407 19:42:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:08:22.407 19:42:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:08:22.407 19:42:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:08:22.407 19:42:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:22.407 19:42:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:08:22.407 19:42:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:22.407 19:42:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:08:22.407 19:42:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:22.407 19:42:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:22.407 rmmod nvme_tcp 00:08:22.407 rmmod nvme_fabrics 00:08:22.407 rmmod nvme_keyring 00:08:22.407 19:42:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:22.407 19:42:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:08:22.407 19:42:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:08:22.407 19:42:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 64791 ']' 00:08:22.407 19:42:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 64791 00:08:22.407 19:42:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 64791 ']' 00:08:22.407 19:42:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 64791 00:08:22.407 19:42:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:08:22.407 19:42:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:22.407 19:42:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64791 00:08:22.666 killing process with pid 64791 00:08:22.666 19:42:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:22.666 19:42:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:22.666 19:42:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64791' 00:08:22.666 19:42:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 64791 
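The teardown traced here follows the usual order: detach the kernel initiator first, then unload the NVMe fabrics modules, and only then stop the target process, so no controller is left pointing at a dead listener. A condensed sketch of that sequence (the NQN and pid are the ones from this run) could look like:

# disconnect the kernel initiator from the test subsystem
nvme disconnect -n nqn.2016-06.io.spdk:cnode1

# unload the host-side transport modules; -r also drops unused dependents (nvme_fabrics, nvme_keyring)
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

# finally stop the SPDK target that was started inside the namespace
kill "$nvmfpid"        # 64791 in this run
wait "$nvmfpid"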
00:08:22.666 19:42:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 64791 00:08:22.666 19:42:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:22.666 19:42:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:22.666 19:42:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:22.666 19:42:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:08:22.666 19:42:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:08:22.666 19:42:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:22.666 19:42:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:08:22.666 19:42:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:22.666 19:42:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:22.666 19:42:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:22.666 19:42:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:22.666 19:42:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:22.666 19:42:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:22.666 19:42:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:22.666 19:42:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:22.666 19:42:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:22.666 19:42:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:22.666 19:42:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:22.924 19:42:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:22.924 19:42:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:22.924 19:42:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:22.924 19:42:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:22.924 19:42:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:22.924 19:42:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:22.924 19:42:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:22.924 19:42:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:22.924 19:42:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:08:22.924 00:08:22.924 real 0m5.718s 00:08:22.924 user 0m18.524s 00:08:22.924 sys 0m1.726s 00:08:22.924 ************************************ 00:08:22.924 END TEST nvmf_nmic 00:08:22.924 ************************************ 00:08:22.924 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:22.924 19:42:18 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:22.924 19:42:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:22.924 19:42:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:22.924 19:42:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:22.924 19:42:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:22.924 ************************************ 00:08:22.924 START TEST nvmf_fio_target 00:08:22.924 ************************************ 00:08:22.924 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:22.924 * Looking for test storage... 00:08:22.924 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:22.924 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:22.924 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:08:22.924 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:23.183 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:23.183 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:23.183 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:23.183 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:23.183 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:08:23.183 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:08:23.183 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:08:23.183 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:08:23.183 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:08:23.183 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:08:23.183 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:08:23.183 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:23.183 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:08:23.183 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:08:23.183 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:23.183 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:23.183 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:08:23.183 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:08:23.183 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:23.183 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:08:23.183 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:08:23.183 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:08:23.183 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:08:23.183 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:23.183 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:08:23.183 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:08:23.183 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:23.183 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:23.183 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:08:23.183 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:23.183 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:23.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.183 --rc genhtml_branch_coverage=1 00:08:23.183 --rc genhtml_function_coverage=1 00:08:23.183 --rc genhtml_legend=1 00:08:23.183 --rc geninfo_all_blocks=1 00:08:23.183 --rc geninfo_unexecuted_blocks=1 00:08:23.183 00:08:23.183 ' 00:08:23.183 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:23.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.183 --rc genhtml_branch_coverage=1 00:08:23.183 --rc genhtml_function_coverage=1 00:08:23.183 --rc genhtml_legend=1 00:08:23.183 --rc geninfo_all_blocks=1 00:08:23.183 --rc geninfo_unexecuted_blocks=1 00:08:23.183 00:08:23.183 ' 00:08:23.183 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:23.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.184 --rc genhtml_branch_coverage=1 00:08:23.184 --rc genhtml_function_coverage=1 00:08:23.184 --rc genhtml_legend=1 00:08:23.184 --rc geninfo_all_blocks=1 00:08:23.184 --rc geninfo_unexecuted_blocks=1 00:08:23.184 00:08:23.184 ' 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:23.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.184 --rc genhtml_branch_coverage=1 00:08:23.184 --rc genhtml_function_coverage=1 00:08:23.184 --rc genhtml_legend=1 00:08:23.184 --rc geninfo_all_blocks=1 00:08:23.184 --rc geninfo_unexecuted_blocks=1 00:08:23.184 00:08:23.184 ' 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:08:23.184 
19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=91838eb1-5852-43eb-90b2-09876f360ab2 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:23.184 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:23.184 19:42:18 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:23.184 Cannot find device "nvmf_init_br" 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:23.184 Cannot find device "nvmf_init_br2" 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:23.184 Cannot find device "nvmf_tgt_br" 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:23.184 Cannot find device "nvmf_tgt_br2" 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:23.184 Cannot find device "nvmf_init_br" 00:08:23.184 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:08:23.185 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:23.185 Cannot find device "nvmf_init_br2" 00:08:23.185 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:08:23.185 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:23.185 Cannot find device "nvmf_tgt_br" 00:08:23.185 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:08:23.185 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:23.185 Cannot find device "nvmf_tgt_br2" 00:08:23.185 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:08:23.185 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:23.185 Cannot find device "nvmf_br" 00:08:23.185 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:08:23.185 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:23.185 Cannot find device "nvmf_init_if" 00:08:23.185 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:08:23.185 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:23.185 Cannot find device "nvmf_init_if2" 00:08:23.185 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:08:23.185 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:23.185 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:23.185 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:08:23.185 
19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:23.185 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:23.185 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:08:23.185 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:23.185 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:23.185 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:23.185 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:23.185 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:23.185 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:23.185 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:23.185 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:23.185 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:23.185 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:23.185 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:23.185 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:23.185 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:23.185 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:23.185 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:23.185 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:23.442 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:23.442 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:23.442 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:23.442 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:23.442 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:23.442 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:23.442 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:23.442 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:08:23.442 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:23.442 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:23.442 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:23.442 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:23.442 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:23.443 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:23.443 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:23.443 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:23.443 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:23.443 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:23.443 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:08:23.443 00:08:23.443 --- 10.0.0.3 ping statistics --- 00:08:23.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.443 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:08:23.443 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:23.443 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:23.443 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.073 ms 00:08:23.443 00:08:23.443 --- 10.0.0.4 ping statistics --- 00:08:23.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.443 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:08:23.443 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:23.443 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:23.443 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:08:23.443 00:08:23.443 --- 10.0.0.1 ping statistics --- 00:08:23.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.443 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:08:23.443 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:23.443 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:23.443 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:08:23.443 00:08:23.443 --- 10.0.0.2 ping statistics --- 00:08:23.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.443 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:08:23.443 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:23.443 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:08:23.443 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:23.443 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:23.443 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:23.443 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:23.443 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:23.443 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:23.443 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:23.443 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:08:23.443 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:23.443 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:23.443 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:23.443 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=65116 00:08:23.443 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 65116 00:08:23.443 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 65116 ']' 00:08:23.443 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:23.443 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.443 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:23.443 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:23.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:23.443 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:23.443 19:42:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:23.443 [2024-11-26 19:42:18.560256] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
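waitforlisten, called right after nvmfappstart above, blocks until the freshly started target (pid 65116 in this run) answers on /var/tmp/spdk.sock before any configuration RPCs are issued. A simplified stand-in for that wait, not the helper's actual implementation, is a bounded poll against the RPC socket:

# poll the RPC socket until the target is ready, or give up after ~100 tries
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for ((i = 0; i < 100; i++)); do
    if "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
        break
    fi
    kill -0 65116 || { echo "nvmf_tgt died before listening" >&2; exit 1; }
    sleep 0.1
done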
00:08:23.443 [2024-11-26 19:42:18.560306] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:23.701 [2024-11-26 19:42:18.691801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:23.701 [2024-11-26 19:42:18.728467] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:23.701 [2024-11-26 19:42:18.728512] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:23.701 [2024-11-26 19:42:18.728519] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:23.701 [2024-11-26 19:42:18.728524] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:23.701 [2024-11-26 19:42:18.728528] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:23.701 [2024-11-26 19:42:18.729342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:23.701 [2024-11-26 19:42:18.729566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:23.701 [2024-11-26 19:42:18.729597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:23.701 [2024-11-26 19:42:18.729603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.701 [2024-11-26 19:42:18.762567] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:24.265 19:42:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:24.265 19:42:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:08:24.265 19:42:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:24.265 19:42:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:24.265 19:42:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:24.265 19:42:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:24.265 19:42:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:24.521 [2024-11-26 19:42:19.605907] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:24.521 19:42:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:24.778 19:42:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:08:24.778 19:42:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:25.036 19:42:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:08:25.036 19:42:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:25.036 19:42:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:08:25.036 19:42:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:25.294 19:42:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:08:25.294 19:42:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:08:25.552 19:42:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:25.809 19:42:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:08:25.809 19:42:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:26.065 19:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:08:26.065 19:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:26.322 19:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:08:26.322 19:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:08:26.322 19:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:26.617 19:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:26.617 19:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:26.875 19:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:26.875 19:42:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:27.132 19:42:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:27.390 [2024-11-26 19:42:22.386845] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:27.390 19:42:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:08:27.390 19:42:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:08:27.647 19:42:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid=91838eb1-5852-43eb-90b2-09876f360ab2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:08:27.905 19:42:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:08:27.905 19:42:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:08:27.905 19:42:22 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:27.905 19:42:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:08:27.905 19:42:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:08:27.905 19:42:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:08:29.853 19:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:08:29.853 19:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:08:29.853 19:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:08:29.853 19:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:08:29.853 19:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:08:29.853 19:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:08:29.853 19:42:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:29.853 [global] 00:08:29.853 thread=1 00:08:29.853 invalidate=1 00:08:29.853 rw=write 00:08:29.853 time_based=1 00:08:29.853 runtime=1 00:08:29.853 ioengine=libaio 00:08:29.853 direct=1 00:08:29.853 bs=4096 00:08:29.853 iodepth=1 00:08:29.853 norandommap=0 00:08:29.853 numjobs=1 00:08:29.853 00:08:29.853 verify_dump=1 00:08:29.853 verify_backlog=512 00:08:29.853 verify_state_save=0 00:08:29.853 do_verify=1 00:08:29.853 verify=crc32c-intel 00:08:29.853 [job0] 00:08:29.853 filename=/dev/nvme0n1 00:08:29.853 [job1] 00:08:29.853 filename=/dev/nvme0n2 00:08:29.853 [job2] 00:08:29.853 filename=/dev/nvme0n3 00:08:29.853 [job3] 00:08:29.853 filename=/dev/nvme0n4 00:08:29.853 Could not set queue depth (nvme0n1) 00:08:29.853 Could not set queue depth (nvme0n2) 00:08:29.853 Could not set queue depth (nvme0n3) 00:08:29.853 Could not set queue depth (nvme0n4) 00:08:30.111 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:30.111 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:30.111 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:30.111 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:30.111 fio-3.35 00:08:30.111 Starting 4 threads 00:08:31.043 00:08:31.043 job0: (groupid=0, jobs=1): err= 0: pid=65291: Tue Nov 26 19:42:26 2024 00:08:31.043 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:08:31.043 slat (nsec): min=5480, max=34821, avg=6597.82, stdev=2189.15 00:08:31.043 clat (usec): min=98, max=837, avg=215.02, stdev=41.59 00:08:31.043 lat (usec): min=104, max=842, avg=221.62, stdev=42.04 00:08:31.043 clat percentiles (usec): 00:08:31.043 | 1.00th=[ 133], 5.00th=[ 182], 10.00th=[ 186], 20.00th=[ 192], 00:08:31.043 | 30.00th=[ 196], 40.00th=[ 198], 50.00th=[ 202], 60.00th=[ 208], 00:08:31.043 | 70.00th=[ 217], 80.00th=[ 249], 90.00th=[ 265], 95.00th=[ 277], 00:08:31.043 | 99.00th=[ 359], 99.50th=[ 375], 99.90th=[ 668], 99.95th=[ 807], 00:08:31.043 | 99.99th=[ 840] 
00:08:31.043 write: IOPS=2939, BW=11.5MiB/s (12.0MB/s)(11.5MiB/1001msec); 0 zone resets 00:08:31.043 slat (nsec): min=8040, max=88365, avg=10656.52, stdev=4895.01 00:08:31.043 clat (usec): min=67, max=912, avg=134.73, stdev=41.79 00:08:31.043 lat (usec): min=76, max=945, avg=145.39, stdev=43.57 00:08:31.043 clat percentiles (usec): 00:08:31.043 | 1.00th=[ 74], 5.00th=[ 80], 10.00th=[ 84], 20.00th=[ 91], 00:08:31.043 | 30.00th=[ 104], 40.00th=[ 141], 50.00th=[ 145], 60.00th=[ 147], 00:08:31.043 | 70.00th=[ 151], 80.00th=[ 157], 90.00th=[ 165], 95.00th=[ 186], 00:08:31.043 | 99.00th=[ 255], 99.50th=[ 289], 99.90th=[ 469], 99.95th=[ 486], 00:08:31.043 | 99.99th=[ 914] 00:08:31.043 bw ( KiB/s): min=12288, max=12288, per=24.25%, avg=12288.00, stdev= 0.00, samples=1 00:08:31.043 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:08:31.043 lat (usec) : 100=14.96%, 250=75.37%, 500=9.58%, 750=0.04%, 1000=0.05% 00:08:31.043 cpu : usr=0.80%, sys=4.20%, ctx=5502, majf=0, minf=11 00:08:31.043 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:31.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:31.043 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:31.043 issued rwts: total=2560,2942,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:31.043 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:31.043 job1: (groupid=0, jobs=1): err= 0: pid=65295: Tue Nov 26 19:42:26 2024 00:08:31.043 read: IOPS=3369, BW=13.2MiB/s (13.8MB/s)(13.2MiB/1001msec) 00:08:31.043 slat (nsec): min=5374, max=58257, avg=6896.89, stdev=2515.59 00:08:31.043 clat (usec): min=76, max=1259, avg=164.07, stdev=36.36 00:08:31.043 lat (usec): min=82, max=1265, avg=170.97, stdev=36.35 00:08:31.043 clat percentiles (usec): 00:08:31.043 | 1.00th=[ 106], 5.00th=[ 139], 10.00th=[ 141], 20.00th=[ 145], 00:08:31.043 | 30.00th=[ 149], 40.00th=[ 151], 50.00th=[ 155], 60.00th=[ 159], 00:08:31.043 | 70.00th=[ 163], 80.00th=[ 178], 90.00th=[ 206], 95.00th=[ 219], 00:08:31.043 | 99.00th=[ 281], 99.50th=[ 289], 99.90th=[ 396], 99.95th=[ 441], 00:08:31.043 | 99.99th=[ 1254] 00:08:31.043 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:08:31.043 slat (nsec): min=8877, max=67125, avg=10981.79, stdev=3146.21 00:08:31.043 clat (usec): min=50, max=382, avg=105.23, stdev=22.03 00:08:31.043 lat (usec): min=62, max=397, avg=116.21, stdev=22.03 00:08:31.043 clat percentiles (usec): 00:08:31.043 | 1.00th=[ 59], 5.00th=[ 65], 10.00th=[ 70], 20.00th=[ 84], 00:08:31.043 | 30.00th=[ 103], 40.00th=[ 108], 50.00th=[ 111], 60.00th=[ 113], 00:08:31.043 | 70.00th=[ 116], 80.00th=[ 120], 90.00th=[ 126], 95.00th=[ 133], 00:08:31.043 | 99.00th=[ 147], 99.50th=[ 155], 99.90th=[ 277], 99.95th=[ 330], 00:08:31.043 | 99.99th=[ 383] 00:08:31.043 bw ( KiB/s): min=16384, max=16384, per=32.34%, avg=16384.00, stdev= 0.00, samples=1 00:08:31.043 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:08:31.043 lat (usec) : 100=13.24%, 250=85.24%, 500=1.51% 00:08:31.043 lat (msec) : 2=0.01% 00:08:31.043 cpu : usr=1.40%, sys=5.10%, ctx=6957, majf=0, minf=11 00:08:31.043 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:31.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:31.043 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:31.043 issued rwts: total=3373,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:31.043 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:08:31.043 job2: (groupid=0, jobs=1): err= 0: pid=65297: Tue Nov 26 19:42:26 2024 00:08:31.043 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:08:31.043 slat (usec): min=5, max=225, avg= 8.14, stdev= 5.65 00:08:31.043 clat (usec): min=113, max=1480, avg=215.45, stdev=47.12 00:08:31.043 lat (usec): min=120, max=1487, avg=223.59, stdev=47.37 00:08:31.043 clat percentiles (usec): 00:08:31.043 | 1.00th=[ 163], 5.00th=[ 182], 10.00th=[ 186], 20.00th=[ 190], 00:08:31.043 | 30.00th=[ 194], 40.00th=[ 198], 50.00th=[ 202], 60.00th=[ 208], 00:08:31.043 | 70.00th=[ 217], 80.00th=[ 245], 90.00th=[ 262], 95.00th=[ 277], 00:08:31.043 | 99.00th=[ 351], 99.50th=[ 388], 99.90th=[ 627], 99.95th=[ 848], 00:08:31.043 | 99.99th=[ 1483] 00:08:31.043 write: IOPS=2567, BW=10.0MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:08:31.043 slat (usec): min=9, max=424, avg=13.75, stdev=12.57 00:08:31.043 clat (usec): min=75, max=389, avg=150.40, stdev=43.17 00:08:31.043 lat (usec): min=87, max=528, avg=164.16, stdev=47.22 00:08:31.043 clat percentiles (usec): 00:08:31.043 | 1.00th=[ 88], 5.00th=[ 94], 10.00th=[ 99], 20.00th=[ 111], 00:08:31.043 | 30.00th=[ 137], 40.00th=[ 143], 50.00th=[ 145], 60.00th=[ 149], 00:08:31.043 | 70.00th=[ 155], 80.00th=[ 163], 90.00th=[ 225], 95.00th=[ 239], 00:08:31.043 | 99.00th=[ 273], 99.50th=[ 289], 99.90th=[ 355], 99.95th=[ 388], 00:08:31.043 | 99.99th=[ 392] 00:08:31.043 bw ( KiB/s): min=12288, max=12288, per=24.25%, avg=12288.00, stdev= 0.00, samples=1 00:08:31.043 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:08:31.043 lat (usec) : 100=5.52%, 250=84.85%, 500=9.53%, 750=0.06%, 1000=0.02% 00:08:31.043 lat (msec) : 2=0.02% 00:08:31.043 cpu : usr=1.10%, sys=4.60%, ctx=5130, majf=0, minf=15 00:08:31.043 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:31.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:31.043 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:31.043 issued rwts: total=2560,2570,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:31.043 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:31.043 job3: (groupid=0, jobs=1): err= 0: pid=65298: Tue Nov 26 19:42:26 2024 00:08:31.044 read: IOPS=3273, BW=12.8MiB/s (13.4MB/s)(12.8MiB/1001msec) 00:08:31.044 slat (nsec): min=5440, max=68690, avg=6721.46, stdev=2800.24 00:08:31.044 clat (usec): min=82, max=1548, avg=162.52, stdev=36.15 00:08:31.044 lat (usec): min=88, max=1560, avg=169.25, stdev=36.54 00:08:31.044 clat percentiles (usec): 00:08:31.044 | 1.00th=[ 127], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 147], 00:08:31.044 | 30.00th=[ 149], 40.00th=[ 151], 50.00th=[ 155], 60.00th=[ 159], 00:08:31.044 | 70.00th=[ 163], 80.00th=[ 176], 90.00th=[ 198], 95.00th=[ 210], 00:08:31.044 | 99.00th=[ 253], 99.50th=[ 273], 99.90th=[ 412], 99.95th=[ 537], 00:08:31.044 | 99.99th=[ 1549] 00:08:31.044 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:08:31.044 slat (usec): min=8, max=288, avg=11.79, stdev= 9.20 00:08:31.044 clat (usec): min=62, max=273, avg=110.72, stdev=23.20 00:08:31.044 lat (usec): min=72, max=404, avg=122.50, stdev=27.58 00:08:31.044 clat percentiles (usec): 00:08:31.044 | 1.00th=[ 68], 5.00th=[ 74], 10.00th=[ 78], 20.00th=[ 97], 00:08:31.044 | 30.00th=[ 106], 40.00th=[ 110], 50.00th=[ 112], 60.00th=[ 115], 00:08:31.044 | 70.00th=[ 118], 80.00th=[ 122], 90.00th=[ 130], 95.00th=[ 161], 00:08:31.044 | 99.00th=[ 182], 99.50th=[ 192], 
99.90th=[ 239], 99.95th=[ 251], 00:08:31.044 | 99.99th=[ 273] 00:08:31.044 bw ( KiB/s): min=14768, max=14768, per=29.15%, avg=14768.00, stdev= 0.00, samples=1 00:08:31.044 iops : min= 3692, max= 3692, avg=3692.00, stdev= 0.00, samples=1 00:08:31.044 lat (usec) : 100=11.15%, 250=88.27%, 500=0.55%, 750=0.01% 00:08:31.044 lat (msec) : 2=0.01% 00:08:31.044 cpu : usr=1.40%, sys=5.20%, ctx=6861, majf=0, minf=9 00:08:31.044 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:31.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:31.044 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:31.044 issued rwts: total=3277,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:31.044 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:31.044 00:08:31.044 Run status group 0 (all jobs): 00:08:31.044 READ: bw=45.9MiB/s (48.2MB/s), 9.99MiB/s-13.2MiB/s (10.5MB/s-13.8MB/s), io=46.0MiB (48.2MB), run=1001-1001msec 00:08:31.044 WRITE: bw=49.5MiB/s (51.9MB/s), 10.0MiB/s-14.0MiB/s (10.5MB/s-14.7MB/s), io=49.5MiB (51.9MB), run=1001-1001msec 00:08:31.044 00:08:31.044 Disk stats (read/write): 00:08:31.044 nvme0n1: ios=2291/2560, merge=0/0, ticks=512/360, in_queue=872, util=89.48% 00:08:31.044 nvme0n2: ios=3093/3072, merge=0/0, ticks=508/337, in_queue=845, util=89.63% 00:08:31.044 nvme0n3: ios=2084/2560, merge=0/0, ticks=463/394, in_queue=857, util=90.04% 00:08:31.044 nvme0n4: ios=2945/3072, merge=0/0, ticks=478/356, in_queue=834, util=89.91% 00:08:31.044 19:42:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:08:31.044 [global] 00:08:31.044 thread=1 00:08:31.044 invalidate=1 00:08:31.044 rw=randwrite 00:08:31.044 time_based=1 00:08:31.044 runtime=1 00:08:31.044 ioengine=libaio 00:08:31.044 direct=1 00:08:31.044 bs=4096 00:08:31.044 iodepth=1 00:08:31.044 norandommap=0 00:08:31.044 numjobs=1 00:08:31.044 00:08:31.302 verify_dump=1 00:08:31.302 verify_backlog=512 00:08:31.302 verify_state_save=0 00:08:31.302 do_verify=1 00:08:31.302 verify=crc32c-intel 00:08:31.302 [job0] 00:08:31.302 filename=/dev/nvme0n1 00:08:31.302 [job1] 00:08:31.302 filename=/dev/nvme0n2 00:08:31.302 [job2] 00:08:31.302 filename=/dev/nvme0n3 00:08:31.302 [job3] 00:08:31.302 filename=/dev/nvme0n4 00:08:31.302 Could not set queue depth (nvme0n1) 00:08:31.302 Could not set queue depth (nvme0n2) 00:08:31.302 Could not set queue depth (nvme0n3) 00:08:31.302 Could not set queue depth (nvme0n4) 00:08:31.302 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:31.302 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:31.302 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:31.302 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:31.302 fio-3.35 00:08:31.302 Starting 4 threads 00:08:32.685 00:08:32.685 job0: (groupid=0, jobs=1): err= 0: pid=65351: Tue Nov 26 19:42:27 2024 00:08:32.685 read: IOPS=4853, BW=19.0MiB/s (19.9MB/s)(19.0MiB/1001msec) 00:08:32.685 slat (usec): min=4, max=114, avg= 7.55, stdev= 4.79 00:08:32.685 clat (usec): min=68, max=4020, avg=102.89, stdev=96.90 00:08:32.685 lat (usec): min=78, max=4026, avg=110.43, stdev=97.27 00:08:32.685 clat percentiles (usec): 00:08:32.685 | 1.00th=[ 80], 
5.00th=[ 84], 10.00th=[ 86], 20.00th=[ 90], 00:08:32.685 | 30.00th=[ 92], 40.00th=[ 95], 50.00th=[ 97], 60.00th=[ 100], 00:08:32.685 | 70.00th=[ 103], 80.00th=[ 109], 90.00th=[ 116], 95.00th=[ 124], 00:08:32.685 | 99.00th=[ 149], 99.50th=[ 167], 99.90th=[ 2278], 99.95th=[ 2966], 00:08:32.685 | 99.99th=[ 4015] 00:08:32.685 write: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec); 0 zone resets 00:08:32.685 slat (usec): min=6, max=121, avg=12.34, stdev= 7.92 00:08:32.685 clat (usec): min=53, max=309, avg=76.10, stdev=15.09 00:08:32.685 lat (usec): min=63, max=320, avg=88.44, stdev=18.22 00:08:32.685 clat percentiles (usec): 00:08:32.685 | 1.00th=[ 58], 5.00th=[ 62], 10.00th=[ 64], 20.00th=[ 68], 00:08:32.685 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 76], 00:08:32.685 | 70.00th=[ 80], 80.00th=[ 84], 90.00th=[ 90], 95.00th=[ 99], 00:08:32.685 | 99.00th=[ 117], 99.50th=[ 130], 99.90th=[ 269], 99.95th=[ 297], 00:08:32.685 | 99.99th=[ 310] 00:08:32.685 bw ( KiB/s): min=20480, max=20480, per=33.02%, avg=20480.00, stdev= 0.00, samples=1 00:08:32.685 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:08:32.685 lat (usec) : 100=78.78%, 250=20.97%, 500=0.18%, 750=0.01% 00:08:32.685 lat (msec) : 2=0.01%, 4=0.04%, 10=0.01% 00:08:32.685 cpu : usr=2.20%, sys=8.50%, ctx=9996, majf=0, minf=13 00:08:32.685 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:32.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:32.685 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:32.685 issued rwts: total=4858,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:32.685 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:32.685 job1: (groupid=0, jobs=1): err= 0: pid=65352: Tue Nov 26 19:42:27 2024 00:08:32.686 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:08:32.686 slat (nsec): min=5282, max=57485, avg=7378.12, stdev=3321.34 00:08:32.686 clat (usec): min=98, max=541, avg=210.85, stdev=33.46 00:08:32.686 lat (usec): min=104, max=546, avg=218.23, stdev=33.72 00:08:32.686 clat percentiles (usec): 00:08:32.686 | 1.00th=[ 172], 5.00th=[ 184], 10.00th=[ 190], 20.00th=[ 194], 00:08:32.686 | 30.00th=[ 198], 40.00th=[ 200], 50.00th=[ 204], 60.00th=[ 208], 00:08:32.686 | 70.00th=[ 212], 80.00th=[ 219], 90.00th=[ 241], 95.00th=[ 269], 00:08:32.686 | 99.00th=[ 379], 99.50th=[ 396], 99.90th=[ 453], 99.95th=[ 465], 00:08:32.686 | 99.99th=[ 545] 00:08:32.686 write: IOPS=2664, BW=10.4MiB/s (10.9MB/s)(10.4MiB/1001msec); 0 zone resets 00:08:32.686 slat (usec): min=8, max=104, avg=11.57, stdev= 5.25 00:08:32.686 clat (usec): min=39, max=1141, avg=151.92, stdev=33.61 00:08:32.686 lat (usec): min=77, max=1163, avg=163.49, stdev=33.90 00:08:32.686 clat percentiles (usec): 00:08:32.686 | 1.00th=[ 80], 5.00th=[ 88], 10.00th=[ 105], 20.00th=[ 145], 00:08:32.686 | 30.00th=[ 149], 40.00th=[ 153], 50.00th=[ 155], 60.00th=[ 159], 00:08:32.686 | 70.00th=[ 161], 80.00th=[ 167], 90.00th=[ 174], 95.00th=[ 182], 00:08:32.686 | 99.00th=[ 204], 99.50th=[ 235], 99.90th=[ 408], 99.95th=[ 474], 00:08:32.686 | 99.99th=[ 1139] 00:08:32.686 bw ( KiB/s): min=12288, max=12288, per=19.81%, avg=12288.00, stdev= 0.00, samples=1 00:08:32.686 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:08:32.686 lat (usec) : 50=0.02%, 100=4.74%, 250=90.76%, 500=4.44%, 750=0.02% 00:08:32.686 lat (msec) : 2=0.02% 00:08:32.686 cpu : usr=1.00%, sys=4.30%, ctx=5230, majf=0, minf=13 00:08:32.686 IO depths : 1=100.0%, 2=0.0%, 
4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:32.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:32.686 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:32.686 issued rwts: total=2560,2667,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:32.686 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:32.686 job2: (groupid=0, jobs=1): err= 0: pid=65353: Tue Nov 26 19:42:27 2024 00:08:32.686 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:08:32.686 slat (nsec): min=5561, max=26864, avg=6699.21, stdev=1746.55 00:08:32.686 clat (usec): min=115, max=590, avg=209.37, stdev=29.29 00:08:32.686 lat (usec): min=122, max=596, avg=216.06, stdev=29.36 00:08:32.686 clat percentiles (usec): 00:08:32.686 | 1.00th=[ 159], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 194], 00:08:32.686 | 30.00th=[ 198], 40.00th=[ 200], 50.00th=[ 204], 60.00th=[ 208], 00:08:32.686 | 70.00th=[ 212], 80.00th=[ 219], 90.00th=[ 235], 95.00th=[ 260], 00:08:32.686 | 99.00th=[ 302], 99.50th=[ 367], 99.90th=[ 498], 99.95th=[ 562], 00:08:32.686 | 99.99th=[ 594] 00:08:32.686 write: IOPS=2611, BW=10.2MiB/s (10.7MB/s)(10.2MiB/1001msec); 0 zone resets 00:08:32.686 slat (usec): min=9, max=162, avg=11.49, stdev= 7.46 00:08:32.686 clat (usec): min=76, max=1230, avg=157.59, stdev=35.00 00:08:32.686 lat (usec): min=85, max=1240, avg=169.08, stdev=37.72 00:08:32.686 clat percentiles (usec): 00:08:32.686 | 1.00th=[ 93], 5.00th=[ 104], 10.00th=[ 137], 20.00th=[ 147], 00:08:32.686 | 30.00th=[ 151], 40.00th=[ 153], 50.00th=[ 157], 60.00th=[ 159], 00:08:32.686 | 70.00th=[ 163], 80.00th=[ 169], 90.00th=[ 178], 95.00th=[ 198], 00:08:32.686 | 99.00th=[ 262], 99.50th=[ 289], 99.90th=[ 383], 99.95th=[ 429], 00:08:32.686 | 99.99th=[ 1237] 00:08:32.686 bw ( KiB/s): min=12288, max=12288, per=19.81%, avg=12288.00, stdev= 0.00, samples=1 00:08:32.686 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:08:32.686 lat (usec) : 100=1.70%, 250=94.20%, 500=4.04%, 750=0.04% 00:08:32.686 lat (msec) : 2=0.02% 00:08:32.686 cpu : usr=1.30%, sys=3.60%, ctx=5175, majf=0, minf=11 00:08:32.686 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:32.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:32.686 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:32.686 issued rwts: total=2560,2614,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:32.686 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:32.686 job3: (groupid=0, jobs=1): err= 0: pid=65354: Tue Nov 26 19:42:27 2024 00:08:32.686 read: IOPS=4876, BW=19.0MiB/s (20.0MB/s)(19.1MiB/1001msec) 00:08:32.686 slat (nsec): min=5151, max=44236, avg=6433.62, stdev=2753.03 00:08:32.686 clat (usec): min=78, max=392, avg=102.64, stdev=14.84 00:08:32.686 lat (usec): min=83, max=399, avg=109.07, stdev=15.33 00:08:32.686 clat percentiles (usec): 00:08:32.686 | 1.00th=[ 85], 5.00th=[ 88], 10.00th=[ 90], 20.00th=[ 93], 00:08:32.686 | 30.00th=[ 96], 40.00th=[ 98], 50.00th=[ 100], 60.00th=[ 103], 00:08:32.686 | 70.00th=[ 106], 80.00th=[ 111], 90.00th=[ 118], 95.00th=[ 125], 00:08:32.686 | 99.00th=[ 145], 99.50th=[ 161], 99.90th=[ 285], 99.95th=[ 310], 00:08:32.686 | 99.99th=[ 392] 00:08:32.686 write: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec); 0 zone resets 00:08:32.686 slat (nsec): min=6964, max=94091, avg=10550.84, stdev=4899.25 00:08:32.686 clat (usec): min=55, max=305, avg=79.08, stdev=12.65 00:08:32.686 lat (usec): min=65, max=314, avg=89.63, 
stdev=14.28 00:08:32.686 clat percentiles (usec): 00:08:32.686 | 1.00th=[ 63], 5.00th=[ 66], 10.00th=[ 69], 20.00th=[ 71], 00:08:32.686 | 30.00th=[ 73], 40.00th=[ 75], 50.00th=[ 77], 60.00th=[ 80], 00:08:32.686 | 70.00th=[ 83], 80.00th=[ 86], 90.00th=[ 93], 95.00th=[ 99], 00:08:32.686 | 99.00th=[ 118], 99.50th=[ 126], 99.90th=[ 210], 99.95th=[ 245], 00:08:32.686 | 99.99th=[ 306] 00:08:32.686 bw ( KiB/s): min=20480, max=20480, per=33.02%, avg=20480.00, stdev= 0.00, samples=1 00:08:32.686 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:08:32.686 lat (usec) : 100=72.88%, 250=27.04%, 500=0.08% 00:08:32.686 cpu : usr=1.70%, sys=7.40%, ctx=10001, majf=0, minf=11 00:08:32.686 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:32.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:32.686 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:32.686 issued rwts: total=4881,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:32.686 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:32.686 00:08:32.686 Run status group 0 (all jobs): 00:08:32.686 READ: bw=58.0MiB/s (60.8MB/s), 9.99MiB/s-19.0MiB/s (10.5MB/s-20.0MB/s), io=58.0MiB (60.9MB), run=1001-1001msec 00:08:32.686 WRITE: bw=60.6MiB/s (63.5MB/s), 10.2MiB/s-20.0MiB/s (10.7MB/s-20.9MB/s), io=60.6MiB (63.6MB), run=1001-1001msec 00:08:32.686 00:08:32.686 Disk stats (read/write): 00:08:32.686 nvme0n1: ios=4266/4608, merge=0/0, ticks=450/361, in_queue=811, util=89.38% 00:08:32.686 nvme0n2: ios=2152/2560, merge=0/0, ticks=467/406, in_queue=873, util=90.04% 00:08:32.686 nvme0n3: ios=2084/2560, merge=0/0, ticks=431/416, in_queue=847, util=89.96% 00:08:32.686 nvme0n4: ios=4280/4608, merge=0/0, ticks=449/375, in_queue=824, util=90.03% 00:08:32.686 19:42:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:08:32.686 [global] 00:08:32.686 thread=1 00:08:32.686 invalidate=1 00:08:32.686 rw=write 00:08:32.686 time_based=1 00:08:32.686 runtime=1 00:08:32.686 ioengine=libaio 00:08:32.686 direct=1 00:08:32.686 bs=4096 00:08:32.686 iodepth=128 00:08:32.686 norandommap=0 00:08:32.686 numjobs=1 00:08:32.686 00:08:32.686 verify_dump=1 00:08:32.686 verify_backlog=512 00:08:32.686 verify_state_save=0 00:08:32.686 do_verify=1 00:08:32.686 verify=crc32c-intel 00:08:32.686 [job0] 00:08:32.686 filename=/dev/nvme0n1 00:08:32.686 [job1] 00:08:32.686 filename=/dev/nvme0n2 00:08:32.686 [job2] 00:08:32.686 filename=/dev/nvme0n3 00:08:32.686 [job3] 00:08:32.686 filename=/dev/nvme0n4 00:08:32.686 Could not set queue depth (nvme0n1) 00:08:32.686 Could not set queue depth (nvme0n2) 00:08:32.686 Could not set queue depth (nvme0n3) 00:08:32.686 Could not set queue depth (nvme0n4) 00:08:32.686 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:32.686 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:32.686 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:32.686 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:32.686 fio-3.35 00:08:32.686 Starting 4 threads 00:08:34.083 00:08:34.083 job0: (groupid=0, jobs=1): err= 0: pid=65407: Tue Nov 26 19:42:28 2024 00:08:34.083 read: IOPS=7405, BW=28.9MiB/s (30.3MB/s)(29.0MiB/1002msec) 
00:08:34.083 slat (usec): min=3, max=2649, avg=65.80, stdev=260.06 00:08:34.083 clat (usec): min=495, max=11077, avg=8333.81, stdev=904.14 00:08:34.083 lat (usec): min=1597, max=11091, avg=8399.61, stdev=926.28 00:08:34.083 clat percentiles (usec): 00:08:34.083 | 1.00th=[ 6128], 5.00th=[ 6849], 10.00th=[ 7308], 20.00th=[ 8029], 00:08:34.083 | 30.00th=[ 8160], 40.00th=[ 8291], 50.00th=[ 8356], 60.00th=[ 8455], 00:08:34.083 | 70.00th=[ 8586], 80.00th=[ 8717], 90.00th=[ 9503], 95.00th=[ 9765], 00:08:34.083 | 99.00th=[10421], 99.50th=[10552], 99.90th=[10814], 99.95th=[10945], 00:08:34.083 | 99.99th=[11076] 00:08:34.083 write: IOPS=7664, BW=29.9MiB/s (31.4MB/s)(30.0MiB/1002msec); 0 zone resets 00:08:34.083 slat (usec): min=6, max=2482, avg=62.27, stdev=223.96 00:08:34.083 clat (usec): min=6092, max=10969, avg=8461.05, stdev=703.85 00:08:34.083 lat (usec): min=6110, max=10983, avg=8523.31, stdev=722.97 00:08:34.083 clat percentiles (usec): 00:08:34.083 | 1.00th=[ 6652], 5.00th=[ 7504], 10.00th=[ 7832], 20.00th=[ 8029], 00:08:34.083 | 30.00th=[ 8160], 40.00th=[ 8291], 50.00th=[ 8455], 60.00th=[ 8455], 00:08:34.083 | 70.00th=[ 8586], 80.00th=[ 8717], 90.00th=[ 9503], 95.00th=[10028], 00:08:34.083 | 99.00th=[10552], 99.50th=[10683], 99.90th=[10814], 99.95th=[10945], 00:08:34.083 | 99.99th=[10945] 00:08:34.083 bw ( KiB/s): min=30256, max=31184, per=28.26%, avg=30720.00, stdev=656.20, samples=2 00:08:34.083 iops : min= 7564, max= 7796, avg=7680.00, stdev=164.05, samples=2 00:08:34.084 lat (usec) : 500=0.01% 00:08:34.084 lat (msec) : 2=0.13%, 4=0.15%, 10=95.34%, 20=4.38% 00:08:34.084 cpu : usr=3.40%, sys=12.89%, ctx=997, majf=0, minf=1 00:08:34.084 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:08:34.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:34.084 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:34.084 issued rwts: total=7420,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:34.084 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:34.084 job1: (groupid=0, jobs=1): err= 0: pid=65408: Tue Nov 26 19:42:28 2024 00:08:34.084 read: IOPS=7516, BW=29.4MiB/s (30.8MB/s)(29.5MiB/1004msec) 00:08:34.084 slat (usec): min=3, max=4812, avg=62.54, stdev=401.62 00:08:34.084 clat (usec): min=902, max=14069, avg=8684.03, stdev=1039.93 00:08:34.084 lat (usec): min=3686, max=16761, avg=8746.57, stdev=1055.02 00:08:34.084 clat percentiles (usec): 00:08:34.084 | 1.00th=[ 5145], 5.00th=[ 7373], 10.00th=[ 8094], 20.00th=[ 8356], 00:08:34.084 | 30.00th=[ 8455], 40.00th=[ 8586], 50.00th=[ 8717], 60.00th=[ 8848], 00:08:34.084 | 70.00th=[ 8979], 80.00th=[ 9110], 90.00th=[ 9241], 95.00th=[ 9372], 00:08:34.084 | 99.00th=[13304], 99.50th=[13829], 99.90th=[13960], 99.95th=[14091], 00:08:34.084 | 99.99th=[14091] 00:08:34.084 write: IOPS=7649, BW=29.9MiB/s (31.3MB/s)(30.0MiB/1004msec); 0 zone resets 00:08:34.084 slat (usec): min=5, max=5028, avg=64.63, stdev=389.40 00:08:34.084 clat (usec): min=3712, max=11283, avg=8027.72, stdev=730.77 00:08:34.084 lat (usec): min=5303, max=11493, avg=8092.35, stdev=644.34 00:08:34.084 clat percentiles (usec): 00:08:34.084 | 1.00th=[ 5211], 5.00th=[ 7111], 10.00th=[ 7308], 20.00th=[ 7570], 00:08:34.084 | 30.00th=[ 7767], 40.00th=[ 7963], 50.00th=[ 8094], 60.00th=[ 8160], 00:08:34.084 | 70.00th=[ 8356], 80.00th=[ 8455], 90.00th=[ 8586], 95.00th=[ 8979], 00:08:34.084 | 99.00th=[10421], 99.50th=[10814], 99.90th=[11076], 99.95th=[11207], 00:08:34.084 | 99.99th=[11338] 00:08:34.084 bw ( KiB/s): 
min=29704, max=31736, per=28.26%, avg=30720.00, stdev=1436.84, samples=2 00:08:34.084 iops : min= 7426, max= 7934, avg=7680.00, stdev=359.21, samples=2 00:08:34.084 lat (usec) : 1000=0.01% 00:08:34.084 lat (msec) : 4=0.24%, 10=97.73%, 20=2.02% 00:08:34.084 cpu : usr=3.49%, sys=12.46%, ctx=324, majf=0, minf=9 00:08:34.084 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:08:34.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:34.084 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:34.084 issued rwts: total=7547,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:34.084 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:34.084 job2: (groupid=0, jobs=1): err= 0: pid=65409: Tue Nov 26 19:42:28 2024 00:08:34.084 read: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec) 00:08:34.084 slat (usec): min=2, max=9700, avg=91.77, stdev=624.17 00:08:34.084 clat (usec): min=6938, max=21001, avg=12542.07, stdev=1533.23 00:08:34.084 lat (usec): min=6952, max=23774, avg=12633.84, stdev=1556.46 00:08:34.084 clat percentiles (usec): 00:08:34.084 | 1.00th=[ 7439], 5.00th=[11207], 10.00th=[11863], 20.00th=[12125], 00:08:34.084 | 30.00th=[12256], 40.00th=[12387], 50.00th=[12387], 60.00th=[12518], 00:08:34.084 | 70.00th=[12780], 80.00th=[12911], 90.00th=[13435], 95.00th=[14222], 00:08:34.084 | 99.00th=[18744], 99.50th=[19006], 99.90th=[19530], 99.95th=[19792], 00:08:34.084 | 99.99th=[21103] 00:08:34.084 write: IOPS=5555, BW=21.7MiB/s (22.8MB/s)(21.7MiB/1002msec); 0 zone resets 00:08:34.084 slat (usec): min=2, max=9691, avg=91.21, stdev=594.41 00:08:34.084 clat (usec): min=542, max=17255, avg=11292.56, stdev=1310.13 00:08:34.084 lat (usec): min=5376, max=17273, avg=11383.77, stdev=1202.74 00:08:34.084 clat percentiles (usec): 00:08:34.084 | 1.00th=[ 6128], 5.00th=[ 9896], 10.00th=[10290], 20.00th=[10683], 00:08:34.084 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11469], 60.00th=[11600], 00:08:34.084 | 70.00th=[11731], 80.00th=[11863], 90.00th=[12256], 95.00th=[12518], 00:08:34.084 | 99.00th=[16450], 99.50th=[16909], 99.90th=[17171], 99.95th=[17171], 00:08:34.084 | 99.99th=[17171] 00:08:34.084 bw ( KiB/s): min=21048, max=22472, per=20.02%, avg=21760.00, stdev=1006.92, samples=2 00:08:34.084 iops : min= 5262, max= 5618, avg=5440.00, stdev=251.73, samples=2 00:08:34.084 lat (usec) : 750=0.01% 00:08:34.084 lat (msec) : 10=4.61%, 20=95.37%, 50=0.01% 00:08:34.084 cpu : usr=2.80%, sys=8.39%, ctx=227, majf=0, minf=13 00:08:34.084 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:08:34.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:34.084 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:34.084 issued rwts: total=5120,5567,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:34.084 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:34.084 job3: (groupid=0, jobs=1): err= 0: pid=65410: Tue Nov 26 19:42:28 2024 00:08:34.084 read: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec) 00:08:34.084 slat (usec): min=3, max=3988, avg=80.45, stdev=330.65 00:08:34.084 clat (usec): min=7213, max=14378, avg=10129.85, stdev=1022.57 00:08:34.084 lat (usec): min=7355, max=14403, avg=10210.30, stdev=1053.18 00:08:34.084 clat percentiles (usec): 00:08:34.084 | 1.00th=[ 7701], 5.00th=[ 8225], 10.00th=[ 8717], 20.00th=[ 9634], 00:08:34.084 | 30.00th=[ 9896], 40.00th=[10028], 50.00th=[10028], 60.00th=[10159], 00:08:34.084 | 70.00th=[10421], 
80.00th=[10683], 90.00th=[11600], 95.00th=[12125], 00:08:34.084 | 99.00th=[12780], 99.50th=[12911], 99.90th=[13173], 99.95th=[13698], 00:08:34.084 | 99.99th=[14353] 00:08:34.084 write: IOPS=6334, BW=24.7MiB/s (25.9MB/s)(24.8MiB/1003msec); 0 zone resets 00:08:34.084 slat (usec): min=5, max=7331, avg=75.37, stdev=299.70 00:08:34.084 clat (usec): min=2296, max=17018, avg=10165.99, stdev=1216.67 00:08:34.084 lat (usec): min=2801, max=17035, avg=10241.36, stdev=1237.71 00:08:34.084 clat percentiles (usec): 00:08:34.084 | 1.00th=[ 7242], 5.00th=[ 8717], 10.00th=[ 9241], 20.00th=[ 9503], 00:08:34.084 | 30.00th=[ 9765], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[10159], 00:08:34.084 | 70.00th=[10290], 80.00th=[10421], 90.00th=[11994], 95.00th=[12387], 00:08:34.084 | 99.00th=[13960], 99.50th=[14091], 99.90th=[15008], 99.95th=[16188], 00:08:34.084 | 99.99th=[16909] 00:08:34.084 bw ( KiB/s): min=24576, max=25240, per=22.92%, avg=24908.00, stdev=469.52, samples=2 00:08:34.084 iops : min= 6144, max= 6310, avg=6227.00, stdev=117.38, samples=2 00:08:34.084 lat (msec) : 4=0.14%, 10=44.59%, 20=55.27% 00:08:34.084 cpu : usr=2.79%, sys=10.08%, ctx=850, majf=0, minf=8 00:08:34.084 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:08:34.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:34.084 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:34.084 issued rwts: total=6144,6354,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:34.084 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:34.084 00:08:34.084 Run status group 0 (all jobs): 00:08:34.084 READ: bw=102MiB/s (107MB/s), 20.0MiB/s-29.4MiB/s (20.9MB/s-30.8MB/s), io=102MiB (107MB), run=1002-1004msec 00:08:34.084 WRITE: bw=106MiB/s (111MB/s), 21.7MiB/s-29.9MiB/s (22.8MB/s-31.4MB/s), io=107MiB (112MB), run=1002-1004msec 00:08:34.084 00:08:34.084 Disk stats (read/write): 00:08:34.084 nvme0n1: ios=6594/6656, merge=0/0, ticks=17839/16640, in_queue=34479, util=89.28% 00:08:34.084 nvme0n2: ios=6593/6656, merge=0/0, ticks=54300/49467, in_queue=103767, util=89.44% 00:08:34.084 nvme0n3: ios=4606/4616, merge=0/0, ticks=55371/49585, in_queue=104956, util=89.50% 00:08:34.084 nvme0n4: ios=5245/5632, merge=0/0, ticks=17303/17233, in_queue=34536, util=89.56% 00:08:34.084 19:42:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:08:34.084 [global] 00:08:34.084 thread=1 00:08:34.084 invalidate=1 00:08:34.084 rw=randwrite 00:08:34.084 time_based=1 00:08:34.084 runtime=1 00:08:34.084 ioengine=libaio 00:08:34.084 direct=1 00:08:34.084 bs=4096 00:08:34.084 iodepth=128 00:08:34.084 norandommap=0 00:08:34.084 numjobs=1 00:08:34.084 00:08:34.084 verify_dump=1 00:08:34.084 verify_backlog=512 00:08:34.084 verify_state_save=0 00:08:34.084 do_verify=1 00:08:34.084 verify=crc32c-intel 00:08:34.084 [job0] 00:08:34.084 filename=/dev/nvme0n1 00:08:34.084 [job1] 00:08:34.084 filename=/dev/nvme0n2 00:08:34.084 [job2] 00:08:34.084 filename=/dev/nvme0n3 00:08:34.084 [job3] 00:08:34.084 filename=/dev/nvme0n4 00:08:34.084 Could not set queue depth (nvme0n1) 00:08:34.084 Could not set queue depth (nvme0n2) 00:08:34.084 Could not set queue depth (nvme0n3) 00:08:34.084 Could not set queue depth (nvme0n4) 00:08:34.084 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:34.084 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:34.084 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:34.084 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:34.084 fio-3.35 00:08:34.084 Starting 4 threads 00:08:35.089 00:08:35.089 job0: (groupid=0, jobs=1): err= 0: pid=65470: Tue Nov 26 19:42:30 2024 00:08:35.089 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:08:35.089 slat (usec): min=3, max=6896, avg=168.81, stdev=663.75 00:08:35.089 clat (usec): min=11626, max=34770, avg=21186.71, stdev=2992.58 00:08:35.089 lat (usec): min=11639, max=34792, avg=21355.51, stdev=2992.88 00:08:35.089 clat percentiles (usec): 00:08:35.089 | 1.00th=[15664], 5.00th=[16909], 10.00th=[17957], 20.00th=[19268], 00:08:35.089 | 30.00th=[19530], 40.00th=[20055], 50.00th=[20579], 60.00th=[21365], 00:08:35.089 | 70.00th=[22414], 80.00th=[23462], 90.00th=[25035], 95.00th=[26346], 00:08:35.089 | 99.00th=[31851], 99.50th=[32900], 99.90th=[33817], 99.95th=[34866], 00:08:35.089 | 99.99th=[34866] 00:08:35.089 write: IOPS=3128, BW=12.2MiB/s (12.8MB/s)(12.2MiB/1001msec); 0 zone resets 00:08:35.089 slat (usec): min=5, max=12443, avg=149.18, stdev=603.30 00:08:35.089 clat (usec): min=237, max=34014, avg=19427.42, stdev=4245.52 00:08:35.089 lat (usec): min=1838, max=34034, avg=19576.60, stdev=4249.35 00:08:35.089 clat percentiles (usec): 00:08:35.089 | 1.00th=[ 2474], 5.00th=[13698], 10.00th=[15533], 20.00th=[16909], 00:08:35.089 | 30.00th=[17695], 40.00th=[18744], 50.00th=[19268], 60.00th=[19792], 00:08:35.089 | 70.00th=[20055], 80.00th=[20841], 90.00th=[23725], 95.00th=[28967], 00:08:35.089 | 99.00th=[33162], 99.50th=[33817], 99.90th=[33817], 99.95th=[33817], 00:08:35.089 | 99.99th=[33817] 00:08:35.089 bw ( KiB/s): min=12288, max=12288, per=13.09%, avg=12288.00, stdev= 0.00, samples=1 00:08:35.089 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:08:35.089 lat (usec) : 250=0.02% 00:08:35.089 lat (msec) : 2=0.11%, 4=0.44%, 10=0.15%, 20=52.79%, 50=46.50% 00:08:35.089 cpu : usr=1.50%, sys=5.70%, ctx=1027, majf=0, minf=17 00:08:35.089 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:08:35.089 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:35.089 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:35.089 issued rwts: total=3072,3132,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:35.089 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:35.089 job1: (groupid=0, jobs=1): err= 0: pid=65471: Tue Nov 26 19:42:30 2024 00:08:35.089 read: IOPS=8686, BW=33.9MiB/s (35.6MB/s)(34.0MiB/1002msec) 00:08:35.089 slat (usec): min=3, max=1810, avg=54.56, stdev=261.11 00:08:35.089 clat (usec): min=5501, max=8073, avg=7286.53, stdev=319.13 00:08:35.089 lat (usec): min=6695, max=8084, avg=7341.08, stdev=187.57 00:08:35.089 clat percentiles (usec): 00:08:35.089 | 1.00th=[ 5735], 5.00th=[ 6980], 10.00th=[ 7046], 20.00th=[ 7177], 00:08:35.089 | 30.00th=[ 7242], 40.00th=[ 7242], 50.00th=[ 7308], 60.00th=[ 7373], 00:08:35.089 | 70.00th=[ 7439], 80.00th=[ 7504], 90.00th=[ 7570], 95.00th=[ 7635], 00:08:35.089 | 99.00th=[ 7832], 99.50th=[ 7898], 99.90th=[ 8029], 99.95th=[ 8029], 00:08:35.089 | 99.99th=[ 8094] 00:08:35.089 write: IOPS=9070, BW=35.4MiB/s (37.2MB/s)(35.5MiB/1002msec); 0 zone resets 00:08:35.089 slat (usec): min=5, max=1689, avg=53.82, stdev=228.61 00:08:35.089 clat 
(usec): min=205, max=7853, avg=6975.48, stdev=492.63 00:08:35.089 lat (usec): min=1488, max=7868, avg=7029.30, stdev=436.61 00:08:35.089 clat percentiles (usec): 00:08:35.089 | 1.00th=[ 5342], 5.00th=[ 6652], 10.00th=[ 6783], 20.00th=[ 6849], 00:08:35.089 | 30.00th=[ 6980], 40.00th=[ 6980], 50.00th=[ 7046], 60.00th=[ 7046], 00:08:35.089 | 70.00th=[ 7111], 80.00th=[ 7177], 90.00th=[ 7242], 95.00th=[ 7373], 00:08:35.089 | 99.00th=[ 7570], 99.50th=[ 7701], 99.90th=[ 7832], 99.95th=[ 7832], 00:08:35.089 | 99.99th=[ 7832] 00:08:35.089 bw ( KiB/s): min=34832, max=36864, per=38.20%, avg=35848.00, stdev=1436.84, samples=2 00:08:35.089 iops : min= 8708, max= 9216, avg=8962.00, stdev=359.21, samples=2 00:08:35.089 lat (usec) : 250=0.01% 00:08:35.089 lat (msec) : 2=0.18%, 4=0.18%, 10=99.63% 00:08:35.089 cpu : usr=3.30%, sys=14.09%, ctx=558, majf=0, minf=9 00:08:35.089 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:08:35.089 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:35.089 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:35.089 issued rwts: total=8704,9089,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:35.089 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:35.089 job2: (groupid=0, jobs=1): err= 0: pid=65472: Tue Nov 26 19:42:30 2024 00:08:35.089 read: IOPS=7657, BW=29.9MiB/s (31.4MB/s)(30.0MiB/1003msec) 00:08:35.089 slat (usec): min=4, max=3927, avg=64.60, stdev=308.29 00:08:35.089 clat (usec): min=4869, max=12224, avg=8351.95, stdev=847.31 00:08:35.089 lat (usec): min=4879, max=14419, avg=8416.54, stdev=863.58 00:08:35.089 clat percentiles (usec): 00:08:35.089 | 1.00th=[ 5735], 5.00th=[ 6915], 10.00th=[ 7504], 20.00th=[ 8029], 00:08:35.089 | 30.00th=[ 8160], 40.00th=[ 8291], 50.00th=[ 8356], 60.00th=[ 8455], 00:08:35.089 | 70.00th=[ 8586], 80.00th=[ 8717], 90.00th=[ 9110], 95.00th=[ 9896], 00:08:35.089 | 99.00th=[11076], 99.50th=[11600], 99.90th=[11863], 99.95th=[11994], 00:08:35.089 | 99.99th=[12256] 00:08:35.089 write: IOPS=7864, BW=30.7MiB/s (32.2MB/s)(30.8MiB/1003msec); 0 zone resets 00:08:35.089 slat (usec): min=5, max=3799, avg=59.64, stdev=319.55 00:08:35.089 clat (usec): min=193, max=11857, avg=7965.37, stdev=976.58 00:08:35.089 lat (usec): min=3036, max=12142, avg=8025.02, stdev=1016.34 00:08:35.089 clat percentiles (usec): 00:08:35.089 | 1.00th=[ 3884], 5.00th=[ 6390], 10.00th=[ 7242], 20.00th=[ 7635], 00:08:35.089 | 30.00th=[ 7767], 40.00th=[ 7898], 50.00th=[ 8029], 60.00th=[ 8094], 00:08:35.089 | 70.00th=[ 8225], 80.00th=[ 8356], 90.00th=[ 8586], 95.00th=[ 9503], 00:08:35.089 | 99.00th=[11207], 99.50th=[11469], 99.90th=[11863], 99.95th=[11863], 00:08:35.089 | 99.99th=[11863] 00:08:35.089 bw ( KiB/s): min=29704, max=32376, per=33.07%, avg=31040.00, stdev=1889.39, samples=2 00:08:35.089 iops : min= 7426, max= 8094, avg=7760.00, stdev=472.35, samples=2 00:08:35.089 lat (usec) : 250=0.01% 00:08:35.089 lat (msec) : 4=0.56%, 10=95.22%, 20=4.21% 00:08:35.089 cpu : usr=4.19%, sys=12.28%, ctx=665, majf=0, minf=13 00:08:35.089 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:08:35.089 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:35.089 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:35.089 issued rwts: total=7680,7888,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:35.089 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:35.089 job3: (groupid=0, jobs=1): err= 0: pid=65473: Tue Nov 26 19:42:30 
2024 00:08:35.089 read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec) 00:08:35.089 slat (usec): min=3, max=6272, avg=166.73, stdev=611.62 00:08:35.089 clat (usec): min=13269, max=27860, avg=20933.53, stdev=2650.96 00:08:35.089 lat (usec): min=13727, max=28112, avg=21100.27, stdev=2658.04 00:08:35.089 clat percentiles (usec): 00:08:35.090 | 1.00th=[14222], 5.00th=[16188], 10.00th=[17171], 20.00th=[19268], 00:08:35.090 | 30.00th=[19792], 40.00th=[20055], 50.00th=[20579], 60.00th=[21627], 00:08:35.090 | 70.00th=[22414], 80.00th=[23200], 90.00th=[24511], 95.00th=[25297], 00:08:35.090 | 99.00th=[26870], 99.50th=[27657], 99.90th=[27919], 99.95th=[27919], 00:08:35.090 | 99.99th=[27919] 00:08:35.090 write: IOPS=3413, BW=13.3MiB/s (14.0MB/s)(13.4MiB/1003msec); 0 zone resets 00:08:35.090 slat (usec): min=5, max=12659, avg=137.98, stdev=604.40 00:08:35.090 clat (usec): min=2383, max=32977, avg=18219.04, stdev=3912.42 00:08:35.090 lat (usec): min=7058, max=32998, avg=18357.02, stdev=3923.77 00:08:35.090 clat percentiles (usec): 00:08:35.090 | 1.00th=[10421], 5.00th=[12518], 10.00th=[14091], 20.00th=[14615], 00:08:35.090 | 30.00th=[15664], 40.00th=[16909], 50.00th=[17957], 60.00th=[19006], 00:08:35.090 | 70.00th=[20055], 80.00th=[21103], 90.00th=[22938], 95.00th=[25035], 00:08:35.090 | 99.00th=[30278], 99.50th=[32113], 99.90th=[32900], 99.95th=[32900], 00:08:35.090 | 99.99th=[32900] 00:08:35.090 bw ( KiB/s): min=12256, max=14120, per=14.05%, avg=13188.00, stdev=1318.05, samples=2 00:08:35.090 iops : min= 3064, max= 3530, avg=3297.00, stdev=329.51, samples=2 00:08:35.090 lat (msec) : 4=0.02%, 10=0.18%, 20=53.42%, 50=46.38% 00:08:35.090 cpu : usr=1.70%, sys=6.29%, ctx=886, majf=0, minf=9 00:08:35.090 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:08:35.090 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:35.090 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:35.090 issued rwts: total=3072,3424,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:35.090 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:35.090 00:08:35.090 Run status group 0 (all jobs): 00:08:35.090 READ: bw=87.7MiB/s (92.0MB/s), 12.0MiB/s-33.9MiB/s (12.5MB/s-35.6MB/s), io=88.0MiB (92.3MB), run=1001-1003msec 00:08:35.090 WRITE: bw=91.7MiB/s (96.1MB/s), 12.2MiB/s-35.4MiB/s (12.8MB/s-37.2MB/s), io=91.9MiB (96.4MB), run=1001-1003msec 00:08:35.090 00:08:35.090 Disk stats (read/write): 00:08:35.090 nvme0n1: ios=2610/2946, merge=0/0, ticks=17172/18110, in_queue=35282, util=89.48% 00:08:35.090 nvme0n2: ios=7729/8000, merge=0/0, ticks=12565/12020, in_queue=24585, util=90.14% 00:08:35.090 nvme0n3: ios=6696/7003, merge=0/0, ticks=26823/24083, in_queue=50906, util=90.25% 00:08:35.090 nvme0n4: ios=2693/3072, merge=0/0, ticks=18345/17012, in_queue=35357, util=89.92% 00:08:35.090 19:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:08:35.090 19:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=65493 00:08:35.090 19:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:08:35.090 19:42:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:08:35.090 [global] 00:08:35.090 thread=1 00:08:35.090 invalidate=1 00:08:35.090 rw=read 00:08:35.090 time_based=1 00:08:35.090 runtime=10 00:08:35.090 ioengine=libaio 00:08:35.090 direct=1 00:08:35.090 bs=4096 00:08:35.090 
iodepth=1 00:08:35.090 norandommap=1 00:08:35.090 numjobs=1 00:08:35.090 00:08:35.090 [job0] 00:08:35.090 filename=/dev/nvme0n1 00:08:35.090 [job1] 00:08:35.090 filename=/dev/nvme0n2 00:08:35.090 [job2] 00:08:35.090 filename=/dev/nvme0n3 00:08:35.090 [job3] 00:08:35.090 filename=/dev/nvme0n4 00:08:35.090 Could not set queue depth (nvme0n1) 00:08:35.090 Could not set queue depth (nvme0n2) 00:08:35.090 Could not set queue depth (nvme0n3) 00:08:35.090 Could not set queue depth (nvme0n4) 00:08:35.411 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:35.411 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:35.411 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:35.411 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:35.411 fio-3.35 00:08:35.411 Starting 4 threads 00:08:38.687 19:42:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:08:38.687 fio: pid=65536, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:08:38.687 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=51212288, buflen=4096 00:08:38.687 19:42:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:08:38.687 fio: pid=65535, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:08:38.687 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=88485888, buflen=4096 00:08:38.687 19:42:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:38.687 19:42:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:08:38.687 fio: pid=65533, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:08:38.687 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=61829120, buflen=4096 00:08:38.687 19:42:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:38.688 19:42:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:08:38.945 fio: pid=65534, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:08:38.945 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=26120192, buflen=4096 00:08:38.945 00:08:38.945 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=65533: Tue Nov 26 19:42:34 2024 00:08:38.945 read: IOPS=4523, BW=17.7MiB/s (18.5MB/s)(59.0MiB/3337msec) 00:08:38.945 slat (usec): min=4, max=14164, avg=10.53, stdev=199.85 00:08:38.945 clat (usec): min=84, max=1581, avg=209.56, stdev=64.12 00:08:38.945 lat (usec): min=90, max=14295, avg=220.09, stdev=208.79 00:08:38.945 clat percentiles (usec): 00:08:38.945 | 1.00th=[ 101], 5.00th=[ 110], 10.00th=[ 119], 20.00th=[ 180], 00:08:38.945 | 30.00th=[ 200], 40.00th=[ 206], 50.00th=[ 212], 60.00th=[ 221], 00:08:38.945 | 70.00th=[ 229], 80.00th=[ 241], 90.00th=[ 262], 95.00th=[ 285], 00:08:38.945 | 99.00th=[ 383], 
99.50th=[ 490], 99.90th=[ 750], 99.95th=[ 988], 00:08:38.945 | 99.99th=[ 1418] 00:08:38.945 bw ( KiB/s): min=16656, max=17944, per=21.09%, avg=17061.33, stdev=474.76, samples=6 00:08:38.945 iops : min= 4164, max= 4486, avg=4265.33, stdev=118.69, samples=6 00:08:38.945 lat (usec) : 100=0.86%, 250=84.42%, 500=14.22%, 750=0.39%, 1000=0.05% 00:08:38.945 lat (msec) : 2=0.05% 00:08:38.945 cpu : usr=0.54%, sys=3.36%, ctx=15100, majf=0, minf=1 00:08:38.945 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:38.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:38.945 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:38.945 issued rwts: total=15096,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:38.945 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:38.945 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=65534: Tue Nov 26 19:42:34 2024 00:08:38.945 read: IOPS=6395, BW=25.0MiB/s (26.2MB/s)(88.9MiB/3559msec) 00:08:38.945 slat (usec): min=4, max=18559, avg=10.97, stdev=210.16 00:08:38.945 clat (usec): min=73, max=12083, avg=144.64, stdev=92.21 00:08:38.945 lat (usec): min=91, max=18738, avg=155.61, stdev=229.71 00:08:38.945 clat percentiles (usec): 00:08:38.945 | 1.00th=[ 94], 5.00th=[ 103], 10.00th=[ 113], 20.00th=[ 125], 00:08:38.945 | 30.00th=[ 133], 40.00th=[ 137], 50.00th=[ 141], 60.00th=[ 147], 00:08:38.945 | 70.00th=[ 153], 80.00th=[ 161], 90.00th=[ 176], 95.00th=[ 190], 00:08:38.945 | 99.00th=[ 223], 99.50th=[ 251], 99.90th=[ 392], 99.95th=[ 594], 00:08:38.945 | 99.99th=[ 2245] 00:08:38.945 bw ( KiB/s): min=24272, max=26072, per=31.13%, avg=25180.00, stdev=659.17, samples=6 00:08:38.945 iops : min= 6068, max= 6518, avg=6295.00, stdev=164.79, samples=6 00:08:38.945 lat (usec) : 100=3.46%, 250=96.03%, 500=0.44%, 750=0.02%, 1000=0.01% 00:08:38.945 lat (msec) : 2=0.02%, 4=0.01%, 10=0.01%, 20=0.01% 00:08:38.945 cpu : usr=0.96%, sys=4.69%, ctx=22770, majf=0, minf=2 00:08:38.945 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:38.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:38.945 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:38.945 issued rwts: total=22762,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:38.945 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:38.945 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=65535: Tue Nov 26 19:42:34 2024 00:08:38.945 read: IOPS=6904, BW=27.0MiB/s (28.3MB/s)(84.4MiB/3129msec) 00:08:38.945 slat (usec): min=4, max=9914, avg= 7.53, stdev=85.01 00:08:38.945 clat (usec): min=82, max=3418, avg=136.66, stdev=71.23 00:08:38.945 lat (usec): min=87, max=10071, avg=144.19, stdev=111.18 00:08:38.945 clat percentiles (usec): 00:08:38.945 | 1.00th=[ 95], 5.00th=[ 103], 10.00th=[ 109], 20.00th=[ 115], 00:08:38.945 | 30.00th=[ 120], 40.00th=[ 125], 50.00th=[ 129], 60.00th=[ 135], 00:08:38.945 | 70.00th=[ 143], 80.00th=[ 153], 90.00th=[ 169], 95.00th=[ 186], 00:08:38.945 | 99.00th=[ 223], 99.50th=[ 253], 99.90th=[ 611], 99.95th=[ 2278], 00:08:38.945 | 99.99th=[ 3097] 00:08:38.945 bw ( KiB/s): min=24960, max=30824, per=34.21%, avg=27668.00, stdev=2389.82, samples=6 00:08:38.945 iops : min= 6240, max= 7706, avg=6917.00, stdev=597.45, samples=6 00:08:38.945 lat (usec) : 100=3.23%, 250=96.25%, 500=0.40%, 750=0.02%, 1000=0.01% 00:08:38.945 lat (msec) : 2=0.04%, 
4=0.05% 00:08:38.945 cpu : usr=0.86%, sys=4.38%, ctx=21614, majf=0, minf=2 00:08:38.945 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:38.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:38.945 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:38.945 issued rwts: total=21604,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:38.945 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:38.945 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=65536: Tue Nov 26 19:42:34 2024 00:08:38.945 read: IOPS=4269, BW=16.7MiB/s (17.5MB/s)(48.8MiB/2929msec) 00:08:38.945 slat (usec): min=4, max=100, avg= 7.71, stdev= 4.10 00:08:38.945 clat (usec): min=94, max=1566, avg=225.50, stdev=48.82 00:08:38.945 lat (usec): min=100, max=1576, avg=233.20, stdev=49.02 00:08:38.945 clat percentiles (usec): 00:08:38.945 | 1.00th=[ 153], 5.00th=[ 184], 10.00th=[ 192], 20.00th=[ 200], 00:08:38.945 | 30.00th=[ 206], 40.00th=[ 212], 50.00th=[ 219], 60.00th=[ 225], 00:08:38.945 | 70.00th=[ 233], 80.00th=[ 245], 90.00th=[ 265], 95.00th=[ 285], 00:08:38.945 | 99.00th=[ 379], 99.50th=[ 515], 99.90th=[ 783], 99.95th=[ 914], 00:08:38.945 | 99.99th=[ 1221] 00:08:38.945 bw ( KiB/s): min=16936, max=18016, per=21.33%, avg=17252.80, stdev=433.24, samples=5 00:08:38.945 iops : min= 4234, max= 4504, avg=4313.20, stdev=108.31, samples=5 00:08:38.945 lat (usec) : 100=0.02%, 250=83.56%, 500=15.89%, 750=0.41%, 1000=0.09% 00:08:38.945 lat (msec) : 2=0.02% 00:08:38.945 cpu : usr=0.58%, sys=3.31%, ctx=12504, majf=0, minf=2 00:08:38.945 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:38.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:38.945 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:38.945 issued rwts: total=12504,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:38.945 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:38.945 00:08:38.945 Run status group 0 (all jobs): 00:08:38.945 READ: bw=79.0MiB/s (82.8MB/s), 16.7MiB/s-27.0MiB/s (17.5MB/s-28.3MB/s), io=281MiB (295MB), run=2929-3559msec 00:08:38.945 00:08:38.945 Disk stats (read/write): 00:08:38.945 nvme0n1: ios=13727/0, merge=0/0, ticks=3004/0, in_queue=3004, util=95.78% 00:08:38.945 nvme0n2: ios=21326/0, merge=0/0, ticks=3156/0, in_queue=3156, util=95.10% 00:08:38.945 nvme0n3: ios=20080/0, merge=0/0, ticks=2726/0, in_queue=2726, util=96.38% 00:08:38.945 nvme0n4: ios=12329/0, merge=0/0, ticks=2803/0, in_queue=2803, util=96.75% 00:08:38.945 19:42:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:38.945 19:42:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:08:39.203 19:42:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:39.203 19:42:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:08:39.460 19:42:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:39.460 19:42:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:08:39.717 19:42:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:39.717 19:42:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:08:39.974 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:39.974 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:08:40.232 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:08:40.232 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 65493 00:08:40.232 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:08:40.232 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:40.232 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:40.232 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:40.232 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:08:40.232 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:40.232 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:08:40.232 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:08:40.232 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:40.232 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:08:40.232 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:08:40.232 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:08:40.232 nvmf hotplug test: fio failed as expected 00:08:40.232 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:40.232 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:08:40.232 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:08:40.232 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:08:40.232 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:08:40.233 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:08:40.233 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:40.233 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:08:40.490 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:40.490 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@124 -- # set +e 00:08:40.490 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:40.490 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:40.490 rmmod nvme_tcp 00:08:40.490 rmmod nvme_fabrics 00:08:40.490 rmmod nvme_keyring 00:08:40.490 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:40.490 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:08:40.490 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:08:40.490 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 65116 ']' 00:08:40.490 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 65116 00:08:40.490 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 65116 ']' 00:08:40.490 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 65116 00:08:40.490 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:08:40.490 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:40.490 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65116 00:08:40.490 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:40.490 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:40.490 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65116' 00:08:40.490 killing process with pid 65116 00:08:40.490 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 65116 00:08:40.490 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 65116 00:08:40.490 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:40.490 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:40.490 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:40.490 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:08:40.490 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:08:40.490 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:40.490 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:08:40.490 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:40.490 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:40.490 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:40.490 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:40.490 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:40.490 19:42:35 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:40.490 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:40.490 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:40.490 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:40.490 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:40.490 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:40.747 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:40.747 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:40.747 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:40.747 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:40.747 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:40.747 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:40.747 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:40.747 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:40.747 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:08:40.747 00:08:40.747 real 0m17.812s 00:08:40.747 user 1m7.175s 00:08:40.747 sys 0m8.008s 00:08:40.747 ************************************ 00:08:40.747 END TEST nvmf_fio_target 00:08:40.747 ************************************ 00:08:40.747 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:40.747 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:40.747 19:42:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:08:40.748 19:42:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:40.748 19:42:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:40.748 19:42:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:40.748 ************************************ 00:08:40.748 START TEST nvmf_bdevio 00:08:40.748 ************************************ 00:08:40.748 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:08:40.748 * Looking for test storage... 
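[editor's note] Before the bdevio suite gets going, note the shape of the fio_target teardown traced above: free the malloc bdevs that backed the fio namespaces, disconnect the kernel initiator, drop the subsystem, then let nvmftestfini unload the host modules and tear the veth topology down. A minimal sketch of that sequence, assuming the repo path, bdev names (Malloc2 through Malloc6) and subsystem NQN used in this run; the real fio.sh additionally polls lsblk until the SPDKISFASTANDAWESOME serial disappears before moving on:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for malloc_bdev in Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
        "$rpc" bdev_malloc_delete "$malloc_bdev"          # free each fio backing bdev
    done
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1         # detach the initiator side
    "$rpc" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-tcp                               # nvmfcleanup: unload host modules
    modprobe -v -r nvme-fabrics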
00:08:40.748 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:40.748 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:40.748 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:08:40.748 19:42:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:41.005 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:41.005 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:41.005 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:41.005 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:41.005 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:08:41.005 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:08:41.005 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:08:41.005 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:08:41.005 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:08:41.005 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:08:41.005 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:08:41.005 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:41.005 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:08:41.005 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:08:41.005 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:41.005 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:41.005 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:08:41.005 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:08:41.005 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:41.005 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:08:41.005 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:08:41.005 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:08:41.005 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:08:41.005 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:41.005 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:08:41.005 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:08:41.005 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:41.005 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:41.005 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:08:41.005 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:41.005 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:41.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.005 --rc genhtml_branch_coverage=1 00:08:41.005 --rc genhtml_function_coverage=1 00:08:41.005 --rc genhtml_legend=1 00:08:41.005 --rc geninfo_all_blocks=1 00:08:41.005 --rc geninfo_unexecuted_blocks=1 00:08:41.005 00:08:41.005 ' 00:08:41.005 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:41.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.005 --rc genhtml_branch_coverage=1 00:08:41.005 --rc genhtml_function_coverage=1 00:08:41.005 --rc genhtml_legend=1 00:08:41.005 --rc geninfo_all_blocks=1 00:08:41.005 --rc geninfo_unexecuted_blocks=1 00:08:41.005 00:08:41.005 ' 00:08:41.005 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:41.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.005 --rc genhtml_branch_coverage=1 00:08:41.005 --rc genhtml_function_coverage=1 00:08:41.005 --rc genhtml_legend=1 00:08:41.005 --rc geninfo_all_blocks=1 00:08:41.005 --rc geninfo_unexecuted_blocks=1 00:08:41.005 00:08:41.005 ' 00:08:41.005 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:41.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.006 --rc genhtml_branch_coverage=1 00:08:41.006 --rc genhtml_function_coverage=1 00:08:41.006 --rc genhtml_legend=1 00:08:41.006 --rc geninfo_all_blocks=1 00:08:41.006 --rc geninfo_unexecuted_blocks=1 00:08:41.006 00:08:41.006 ' 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=91838eb1-5852-43eb-90b2-09876f360ab2 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:41.006 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
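[editor's note] For reference, the lcov gate traced earlier in this suite (lt 1.15 2) boils down to scripts/common.sh splitting both dotted version strings and comparing them field by field. A condensed, hedged sketch of that comparison outside the harness; the field splitting and loop bound mirror the traced cmp_versions, while the validation done by the decimal helper is omitted:

    IFS=.-: read -ra ver1 <<< "1.15"
    IFS=.-: read -ra ver2 <<< "2"
    ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { echo '1.15 is newer than 2'; break; }
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { echo '1.15 is older than 2'; break; }
    done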
00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:41.006 Cannot find device "nvmf_init_br" 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:41.006 Cannot find device "nvmf_init_br2" 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:41.006 Cannot find device "nvmf_tgt_br" 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:41.006 Cannot find device "nvmf_tgt_br2" 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:41.006 Cannot find device "nvmf_init_br" 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:41.006 Cannot find device "nvmf_init_br2" 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:41.006 Cannot find device "nvmf_tgt_br" 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:41.006 Cannot find device "nvmf_tgt_br2" 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:41.006 Cannot find device "nvmf_br" 00:08:41.006 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:08:41.007 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:41.007 Cannot find device "nvmf_init_if" 00:08:41.007 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:08:41.007 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:41.007 Cannot find device "nvmf_init_if2" 00:08:41.007 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:08:41.007 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:41.007 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:41.007 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:08:41.007 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:41.007 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:41.007 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:08:41.007 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:41.007 
19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:41.007 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:41.007 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:41.007 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:41.007 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:41.007 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:41.263 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:41.263 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:41.263 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:41.263 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:41.264 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:41.264 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:41.264 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:41.264 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:41.264 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:41.264 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:41.264 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:41.264 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:41.264 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:41.264 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:41.264 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:41.264 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:41.264 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:41.264 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:41.264 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:41.264 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:41.264 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:41.264 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:41.264 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:41.264 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:41.264 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:41.264 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:41.264 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:41.264 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.098 ms 00:08:41.264 00:08:41.264 --- 10.0.0.3 ping statistics --- 00:08:41.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:41.264 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:08:41.264 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:41.264 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:41.264 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:08:41.264 00:08:41.264 --- 10.0.0.4 ping statistics --- 00:08:41.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:41.264 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:08:41.264 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:41.264 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:41.264 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:08:41.264 00:08:41.264 --- 10.0.0.1 ping statistics --- 00:08:41.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:41.264 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:08:41.264 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:41.264 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
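[editor's note] The ping exchanges here are the last step of nvmf_veth_init: the harness builds a bridge in the root namespace, hangs two veth pairs off it, and moves the target-side endpoints into nvmf_tgt_ns_spdk. A condensed sketch of that topology for the first initiator/target pair, assuming the interface names and 10.0.0.0/24 addressing from this run; the full helper repeats the same steps for nvmf_init_if2/nvmf_tgt_if2 and adds the iptables ACCEPT rules shown above:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # target end lives in the netns
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_tgt_br up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                        # bridge the two pairs together
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3                                             # root ns -> target ns sanity check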
00:08:41.264 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.040 ms 00:08:41.264 00:08:41.264 --- 10.0.0.2 ping statistics --- 00:08:41.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:41.264 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:08:41.264 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:41.264 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:08:41.264 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:41.264 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:41.264 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:41.264 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:41.264 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:41.264 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:41.264 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:41.264 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:08:41.264 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:41.264 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:41.264 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:41.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:41.264 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=65859 00:08:41.264 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 65859 00:08:41.264 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 65859 ']' 00:08:41.264 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:41.264 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:41.264 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:41.264 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:41.264 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:41.264 19:42:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:08:41.264 [2024-11-26 19:42:36.417307] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
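[editor's note] The nvmfappstart call traced above reduces to launching the target binary inside the namespace and waiting until its RPC socket answers. A short sketch with the binary path and flags from this run (-i shared-memory id, -e tracepoint group mask, -m reactor core mask); waitforlisten is the harness helper that polls /var/tmp/spdk.sock:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
    nvmfpid=$!
    waitforlisten "$nvmfpid"    # blocks until the app listens on /var/tmp/spdk.sock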
00:08:41.264 [2024-11-26 19:42:36.417493] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:41.521 [2024-11-26 19:42:36.560279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:41.521 [2024-11-26 19:42:36.596748] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:41.521 [2024-11-26 19:42:36.597365] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:41.521 [2024-11-26 19:42:36.597542] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:41.521 [2024-11-26 19:42:36.597703] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:41.521 [2024-11-26 19:42:36.597817] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:41.521 [2024-11-26 19:42:36.598683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:41.521 [2024-11-26 19:42:36.598752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:41.521 [2024-11-26 19:42:36.598922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:41.521 [2024-11-26 19:42:36.598849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:41.521 [2024-11-26 19:42:36.630625] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:42.087 19:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:42.087 19:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:08:42.087 19:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:42.087 19:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:42.087 19:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:42.087 19:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:42.087 19:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:42.087 19:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.087 19:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:42.087 [2024-11-26 19:42:37.282395] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:42.087 19:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.087 19:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:42.087 19:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.087 19:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:42.087 Malloc0 00:08:42.087 19:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.087 19:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:08:42.087 19:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.087 19:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:42.087 19:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.087 19:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:42.087 19:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.087 19:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:42.354 19:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.354 19:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:42.354 19:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.354 19:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:42.354 [2024-11-26 19:42:37.343737] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:42.354 19:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.354 19:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:08:42.354 19:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:08:42.354 19:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:08:42.354 19:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:08:42.354 19:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:42.354 19:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:42.354 { 00:08:42.354 "params": { 00:08:42.354 "name": "Nvme$subsystem", 00:08:42.354 "trtype": "$TEST_TRANSPORT", 00:08:42.354 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:42.354 "adrfam": "ipv4", 00:08:42.354 "trsvcid": "$NVMF_PORT", 00:08:42.354 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:42.354 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:42.354 "hdgst": ${hdgst:-false}, 00:08:42.354 "ddgst": ${ddgst:-false} 00:08:42.354 }, 00:08:42.354 "method": "bdev_nvme_attach_controller" 00:08:42.354 } 00:08:42.354 EOF 00:08:42.354 )") 00:08:42.354 19:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:08:42.354 19:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
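[editor's note] Before the bdevio run, bdevio.sh stands the target up with a handful of RPCs and then feeds the bdevio app the generated JSON config (the gen_nvmf_target_json heredoc above, resolved just below as the --json /dev/fd/62 payload). A sketch of the equivalent plain rpc.py calls, assuming the script path and names from this run; rpc_cmd in the harness only adds socket plumbing around the same commands:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" nvmf_create_transport -t tcp -o -u 8192                     # TCP transport; -u sets the IO unit size
    "$rpc" bdev_malloc_create 64 512 -b Malloc0                        # 64 MiB bdev, 512-byte blocks
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420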
00:08:42.354 19:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:08:42.354 19:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:42.354 "params": { 00:08:42.354 "name": "Nvme1", 00:08:42.354 "trtype": "tcp", 00:08:42.354 "traddr": "10.0.0.3", 00:08:42.354 "adrfam": "ipv4", 00:08:42.354 "trsvcid": "4420", 00:08:42.354 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:42.354 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:42.354 "hdgst": false, 00:08:42.354 "ddgst": false 00:08:42.354 }, 00:08:42.354 "method": "bdev_nvme_attach_controller" 00:08:42.354 }' 00:08:42.354 [2024-11-26 19:42:37.390842] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:08:42.354 [2024-11-26 19:42:37.391038] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65894 ] 00:08:42.354 [2024-11-26 19:42:37.536567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:42.354 [2024-11-26 19:42:37.574382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:42.354 [2024-11-26 19:42:37.574725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:42.354 [2024-11-26 19:42:37.574726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.611 [2024-11-26 19:42:37.614723] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:42.611 I/O targets: 00:08:42.611 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:08:42.611 00:08:42.611 00:08:42.611 CUnit - A unit testing framework for C - Version 2.1-3 00:08:42.611 http://cunit.sourceforge.net/ 00:08:42.611 00:08:42.611 00:08:42.611 Suite: bdevio tests on: Nvme1n1 00:08:42.611 Test: blockdev write read block ...passed 00:08:42.611 Test: blockdev write zeroes read block ...passed 00:08:42.611 Test: blockdev write zeroes read no split ...passed 00:08:42.611 Test: blockdev write zeroes read split ...passed 00:08:42.611 Test: blockdev write zeroes read split partial ...passed 00:08:42.611 Test: blockdev reset ...[2024-11-26 19:42:37.751738] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:08:42.611 [2024-11-26 19:42:37.751962] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9190 (9): Bad file descriptor 00:08:42.611 [2024-11-26 19:42:37.764011] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:08:42.611 passed 00:08:42.611 Test: blockdev write read 8 blocks ...passed 00:08:42.611 Test: blockdev write read size > 128k ...passed 00:08:42.611 Test: blockdev write read invalid size ...passed 00:08:42.611 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:42.611 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:42.611 Test: blockdev write read max offset ...passed 00:08:42.611 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:42.611 Test: blockdev writev readv 8 blocks ...passed 00:08:42.611 Test: blockdev writev readv 30 x 1block ...passed 00:08:42.611 Test: blockdev writev readv block ...passed 00:08:42.611 Test: blockdev writev readv size > 128k ...passed 00:08:42.611 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:42.611 Test: blockdev comparev and writev ...[2024-11-26 19:42:37.770576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:42.612 [2024-11-26 19:42:37.770711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:08:42.612 [2024-11-26 19:42:37.770843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:42.612 [2024-11-26 19:42:37.770913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:08:42.612 [2024-11-26 19:42:37.771271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:42.612 [2024-11-26 19:42:37.771337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:08:42.612 [2024-11-26 19:42:37.771403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:42.612 [2024-11-26 19:42:37.771451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:08:42.612 [2024-11-26 19:42:37.771675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:42.612 [2024-11-26 19:42:37.771688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:08:42.612 [2024-11-26 19:42:37.771698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:42.612 [2024-11-26 19:42:37.771703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:08:42.612 [2024-11-26 19:42:37.771921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:42.612 [2024-11-26 19:42:37.771936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:08:42.612 [2024-11-26 19:42:37.771946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:42.612 [2024-11-26 19:42:37.771950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:08:42.612 passed 00:08:42.612 Test: blockdev nvme passthru rw ...passed 00:08:42.612 Test: blockdev nvme passthru vendor specific ...passed 00:08:42.612 Test: blockdev nvme admin passthru ...[2024-11-26 19:42:37.772598] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:08:42.612 [2024-11-26 19:42:37.772613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:08:42.612 [2024-11-26 19:42:37.772677] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:08:42.612 [2024-11-26 19:42:37.772683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:08:42.612 [2024-11-26 19:42:37.772744] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:08:42.612 [2024-11-26 19:42:37.772750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:08:42.612 [2024-11-26 19:42:37.772820] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:08:42.612 [2024-11-26 19:42:37.772827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:08:42.612 passed 00:08:42.612 Test: blockdev copy ...passed 00:08:42.612 00:08:42.612 Run Summary: Type Total Ran Passed Failed Inactive 00:08:42.612 suites 1 1 n/a 0 0 00:08:42.612 tests 23 23 23 0 0 00:08:42.612 asserts 152 152 152 0 n/a 00:08:42.612 00:08:42.612 Elapsed time = 0.138 seconds 00:08:42.941 19:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:42.941 19:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.941 19:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:42.941 19:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.941 19:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:08:42.941 19:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:08:42.941 19:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:42.941 19:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:08:42.941 19:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:42.941 19:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:08:42.941 19:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:42.941 19:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:42.941 rmmod nvme_tcp 00:08:42.941 rmmod nvme_fabrics 00:08:42.941 rmmod nvme_keyring 00:08:42.941 19:42:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:42.941 19:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:08:42.941 19:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
00:08:42.941 19:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 65859 ']' 00:08:42.941 19:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 65859 00:08:42.941 19:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 65859 ']' 00:08:42.941 19:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 65859 00:08:42.941 19:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:08:42.941 19:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:42.941 19:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65859 00:08:42.941 19:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:08:42.941 killing process with pid 65859 00:08:42.941 19:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:08:42.941 19:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65859' 00:08:42.941 19:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 65859 00:08:42.941 19:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 65859 00:08:43.223 19:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:43.223 19:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:43.223 19:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:43.223 19:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:08:43.223 19:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:43.223 19:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:08:43.223 19:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:08:43.223 19:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:43.223 19:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:43.223 19:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:43.223 19:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:43.223 19:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:43.223 19:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:43.223 19:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:43.223 19:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:43.223 19:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:43.223 19:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:43.223 19:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:43.223 19:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # 
ip link delete nvmf_init_if 00:08:43.223 19:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:43.223 19:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:43.223 19:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:43.223 19:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:43.223 19:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:43.223 19:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:43.223 19:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:43.223 19:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:08:43.223 00:08:43.223 real 0m2.485s 00:08:43.223 user 0m7.362s 00:08:43.223 sys 0m0.614s 00:08:43.223 19:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:43.223 ************************************ 00:08:43.223 END TEST nvmf_bdevio 00:08:43.223 ************************************ 00:08:43.223 19:42:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:43.223 19:42:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:08:43.223 ************************************ 00:08:43.223 END TEST nvmf_target_core 00:08:43.223 ************************************ 00:08:43.223 00:08:43.223 real 2m25.997s 00:08:43.223 user 6m30.056s 00:08:43.223 sys 0m40.561s 00:08:43.223 19:42:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:43.223 19:42:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:43.223 19:42:38 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:08:43.223 19:42:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:43.223 19:42:38 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:43.223 19:42:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:43.223 ************************************ 00:08:43.223 START TEST nvmf_target_extra 00:08:43.223 ************************************ 00:08:43.223 19:42:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:08:43.482 * Looking for test storage... 
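The nvmf_veth_fini teardown traced just above detaches the four bridge ports, brings them down, deletes the bridge, removes the host-side veth ends, removes the target-side ends from inside the namespace, and finally drops the namespace. A minimal sketch of that sequence, assuming the interface and namespace names shown in the log and that _remove_spdk_ns finishes with an ip netns delete (error handling omitted):

    #!/usr/bin/env bash
    # Sketch of the nvmf_veth_fini path traced above.
    for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$port" nomaster      # detach each port from the bridge
    done
    for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$port" down
    done
    ip link delete nvmf_br type bridge    # drop the bridge itself
    ip link delete nvmf_init_if           # host-side initiator veth ends
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if    # target-side ends
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk      # assumed final step of _remove_spdk_ns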
00:08:43.482 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:08:43.482 19:42:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:43.482 19:42:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:43.482 19:42:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:08:43.482 19:42:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:43.482 19:42:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:43.482 19:42:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:43.482 19:42:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:43.482 19:42:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:08:43.482 19:42:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:08:43.482 19:42:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:08:43.482 19:42:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:08:43.482 19:42:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:08:43.482 19:42:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:08:43.482 19:42:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:08:43.482 19:42:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:43.482 19:42:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:08:43.482 19:42:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:08:43.482 19:42:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:43.482 19:42:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:43.482 19:42:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:08:43.482 19:42:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:08:43.482 19:42:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:43.482 19:42:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:08:43.482 19:42:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:08:43.482 19:42:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:08:43.482 19:42:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:08:43.482 19:42:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:43.482 19:42:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:08:43.482 19:42:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:08:43.482 19:42:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:43.482 19:42:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:43.482 19:42:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:08:43.482 19:42:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:43.482 19:42:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:43.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.482 --rc genhtml_branch_coverage=1 00:08:43.482 --rc genhtml_function_coverage=1 00:08:43.482 --rc genhtml_legend=1 00:08:43.482 --rc geninfo_all_blocks=1 00:08:43.482 --rc geninfo_unexecuted_blocks=1 00:08:43.482 00:08:43.482 ' 00:08:43.482 19:42:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:43.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.482 --rc genhtml_branch_coverage=1 00:08:43.482 --rc genhtml_function_coverage=1 00:08:43.482 --rc genhtml_legend=1 00:08:43.482 --rc geninfo_all_blocks=1 00:08:43.482 --rc geninfo_unexecuted_blocks=1 00:08:43.482 00:08:43.482 ' 00:08:43.482 19:42:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:43.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.482 --rc genhtml_branch_coverage=1 00:08:43.482 --rc genhtml_function_coverage=1 00:08:43.482 --rc genhtml_legend=1 00:08:43.482 --rc geninfo_all_blocks=1 00:08:43.482 --rc geninfo_unexecuted_blocks=1 00:08:43.482 00:08:43.482 ' 00:08:43.482 19:42:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:43.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.482 --rc genhtml_branch_coverage=1 00:08:43.482 --rc genhtml_function_coverage=1 00:08:43.482 --rc genhtml_legend=1 00:08:43.482 --rc geninfo_all_blocks=1 00:08:43.482 --rc geninfo_unexecuted_blocks=1 00:08:43.482 00:08:43.482 ' 00:08:43.482 19:42:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:43.482 19:42:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:08:43.482 19:42:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:43.482 19:42:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:43.482 19:42:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:43.482 19:42:38 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:43.482 19:42:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:43.482 19:42:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:43.482 19:42:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:43.482 19:42:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:43.482 19:42:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:43.482 19:42:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:43.482 19:42:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:08:43.482 19:42:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=91838eb1-5852-43eb-90b2-09876f360ab2 00:08:43.482 19:42:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:43.482 19:42:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:43.482 19:42:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:43.482 19:42:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:43.482 19:42:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:43.482 19:42:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:08:43.482 19:42:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:43.482 19:42:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:43.482 19:42:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:43.482 19:42:38 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.483 19:42:38 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.483 19:42:38 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.483 19:42:38 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:08:43.483 19:42:38 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.483 19:42:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:08:43.483 19:42:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:43.483 19:42:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:43.483 19:42:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:43.483 19:42:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:43.483 19:42:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:43.483 19:42:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:43.483 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:43.483 19:42:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:43.483 19:42:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:43.483 19:42:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:43.483 19:42:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:43.483 19:42:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:08:43.483 19:42:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:08:43.483 19:42:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:08:43.483 19:42:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:43.483 19:42:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:43.483 19:42:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:08:43.483 ************************************ 00:08:43.483 START TEST nvmf_auth_target 00:08:43.483 ************************************ 00:08:43.483 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:08:43.483 * Looking for test storage... 
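The "[: : integer expression expected" warning logged from nvmf/common.sh line 33 above is the shell's numeric test choking on a variable that is empty in this run; the guard simply falls through and the script keeps going. A small sketch of the pattern and a defensive default, using SOME_FLAG as a placeholder name (the real variable is not visible in the trace):

    #!/usr/bin/env bash
    # Reproduces the warning: an empty value inside a numeric [ ... -eq ... ] test.
    SOME_FLAG=""                            # placeholder, not the actual name
    if [ "$SOME_FLAG" -eq 1 ]; then         # -> "[: : integer expression expected"
        echo "flag enabled"
    fi
    # Defaulting the expansion keeps the test well-formed when the flag is unset:
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi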
00:08:43.483 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:43.483 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:43.483 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:08:43.483 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:43.483 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:43.483 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:43.483 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:43.483 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:43.483 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:08:43.483 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:08:43.483 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:08:43.483 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:08:43.483 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:08:43.483 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:08:43.483 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:08:43.483 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:43.483 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:08:43.483 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:08:43.483 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:43.483 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:43.483 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:08:43.483 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:08:43.483 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:43.483 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:08:43.483 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:08:43.742 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:08:43.742 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:08:43.742 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:43.742 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:08:43.742 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:08:43.742 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:43.742 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:43.742 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:08:43.742 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:43.742 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:43.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.742 --rc genhtml_branch_coverage=1 00:08:43.742 --rc genhtml_function_coverage=1 00:08:43.742 --rc genhtml_legend=1 00:08:43.742 --rc geninfo_all_blocks=1 00:08:43.742 --rc geninfo_unexecuted_blocks=1 00:08:43.742 00:08:43.742 ' 00:08:43.742 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:43.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.742 --rc genhtml_branch_coverage=1 00:08:43.742 --rc genhtml_function_coverage=1 00:08:43.742 --rc genhtml_legend=1 00:08:43.742 --rc geninfo_all_blocks=1 00:08:43.742 --rc geninfo_unexecuted_blocks=1 00:08:43.742 00:08:43.742 ' 00:08:43.742 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:43.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.742 --rc genhtml_branch_coverage=1 00:08:43.742 --rc genhtml_function_coverage=1 00:08:43.742 --rc genhtml_legend=1 00:08:43.742 --rc geninfo_all_blocks=1 00:08:43.742 --rc geninfo_unexecuted_blocks=1 00:08:43.742 00:08:43.742 ' 00:08:43.742 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:43.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.742 --rc genhtml_branch_coverage=1 00:08:43.742 --rc genhtml_function_coverage=1 00:08:43.742 --rc genhtml_legend=1 00:08:43.742 --rc geninfo_all_blocks=1 00:08:43.742 --rc geninfo_unexecuted_blocks=1 00:08:43.742 00:08:43.742 ' 00:08:43.742 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:43.742 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:08:43.742 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:43.742 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:43.742 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:43.742 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:43.742 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:43.742 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:43.742 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:43.742 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:43.742 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:43.742 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:43.742 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:08:43.742 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=91838eb1-5852-43eb-90b2-09876f360ab2 00:08:43.742 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:43.742 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:43.742 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:43.742 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:43.742 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:43.742 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:08:43.742 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:43.742 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:43.742 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:43.743 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:43.743 
19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:43.743 Cannot find device "nvmf_init_br" 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:43.743 Cannot find device "nvmf_init_br2" 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:43.743 Cannot find device "nvmf_tgt_br" 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:43.743 Cannot find device "nvmf_tgt_br2" 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:43.743 Cannot find device "nvmf_init_br" 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:43.743 Cannot find device "nvmf_init_br2" 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:43.743 Cannot find device "nvmf_tgt_br" 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:43.743 Cannot find device "nvmf_tgt_br2" 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:43.743 Cannot find device "nvmf_br" 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:43.743 Cannot find device "nvmf_init_if" 00:08:43.743 19:42:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:43.743 Cannot find device "nvmf_init_if2" 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:43.743 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:43.743 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:43.743 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:43.744 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:43.744 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:43.744 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:43.744 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:43.744 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:43.744 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:43.744 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:44.001 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:44.002 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:44.002 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:44.002 19:42:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:44.002 19:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:44.002 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:44.002 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:44.002 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:44.002 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:44.002 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:44.002 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:44.002 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:44.002 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:44.002 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:44.002 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:44.002 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:44.002 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:44.002 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:44.002 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:44.002 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:08:44.002 00:08:44.002 --- 10.0.0.3 ping statistics --- 00:08:44.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.002 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:08:44.002 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:44.002 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:44.002 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:08:44.002 00:08:44.002 --- 10.0.0.4 ping statistics --- 00:08:44.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.002 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:08:44.002 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:44.002 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:44.002 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:08:44.002 00:08:44.002 --- 10.0.0.1 ping statistics --- 00:08:44.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.002 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:08:44.002 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:44.002 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:44.002 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:08:44.002 00:08:44.002 --- 10.0.0.2 ping statistics --- 00:08:44.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.002 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:08:44.002 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:44.002 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:08:44.002 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:44.002 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:44.002 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:44.002 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:44.002 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:44.002 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:44.002 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:44.002 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:08:44.002 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:44.002 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:44.002 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:08:44.002 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=66166 00:08:44.002 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 66166 00:08:44.002 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 66166 ']' 00:08:44.002 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:44.002 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:08:44.002 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:44.002 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
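Before this point the trace rebuilt the virtual topology with nvmf_veth_init: a target network namespace, two veth pairs per side, 10.0.0.1/2 on the initiator ends and 10.0.0.3/4 inside the namespace, all ports enslaved to the nvmf_br bridge, ACCEPT rules tagged with an SPDK_NVMF comment so the iptr teardown (iptables-save | grep -v SPDK_NVMF | iptables-restore) can strip them later, and one ping in each direction to prove connectivity. A condensed sketch, assuming the names and addresses shown in the log:

    #!/usr/bin/env bash
    # Condensed sketch of nvmf_veth_init as traced above; error handling omitted.
    ip netns add nvmf_tgt_ns_spdk

    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    ip link set nvmf_init_if up
    ip link set nvmf_init_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$port" up
        ip link set "$port" master nvmf_br
    done

    # Accept NVMe/TCP traffic on 4420 from both initiator interfaces; the
    # SPDK_NVMF comment marks exactly these rules for removal on teardown.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'

    ping -c 1 10.0.0.3                                  # initiator -> target ns
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target ns -> initiator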
00:08:44.002 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:44.002 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:08:44.932 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:44.932 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:08:44.932 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:44.932 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:44.932 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:08:44.932 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:44.932 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=66198 00:08:44.932 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:08:44.932 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:08:44.933 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:08:44.933 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:08:44.933 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:08:44.933 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:08:44.933 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:08:44.933 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:08:44.933 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:08:44.933 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d65da08dab53f19725f15ac59cf048e1c0e394adde4556e0 00:08:44.933 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:08:44.933 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.rUM 00:08:44.933 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d65da08dab53f19725f15ac59cf048e1c0e394adde4556e0 0 00:08:44.933 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d65da08dab53f19725f15ac59cf048e1c0e394adde4556e0 0 00:08:44.933 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:08:44.933 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:08:44.933 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d65da08dab53f19725f15ac59cf048e1c0e394adde4556e0 00:08:44.933 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:08:44.933 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:08:44.933 19:42:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.rUM 00:08:44.933 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.rUM 00:08:44.933 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.rUM 00:08:44.933 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:08:44.933 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:08:44.933 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:08:44.933 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:08:44.933 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:08:44.933 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:08:44.933 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:08:44.933 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=c83eeafbf43c94e83e6fa46ddcce845ddf745e494bac4750eb3f0811d1f6a239 00:08:44.933 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:08:44.933 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.wcb 00:08:44.933 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key c83eeafbf43c94e83e6fa46ddcce845ddf745e494bac4750eb3f0811d1f6a239 3 00:08:44.933 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 c83eeafbf43c94e83e6fa46ddcce845ddf745e494bac4750eb3f0811d1f6a239 3 00:08:44.933 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:08:44.933 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:08:44.933 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=c83eeafbf43c94e83e6fa46ddcce845ddf745e494bac4750eb3f0811d1f6a239 00:08:44.933 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:08:44.933 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:08:44.933 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.wcb 00:08:44.933 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.wcb 00:08:44.933 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.wcb 00:08:44.933 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:08:44.933 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:08:44.933 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:08:44.933 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:08:44.933 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:08:44.933 19:42:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:08:44.933 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:08:44.933 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=4587d15009cb909341e61906c4fb313b 00:08:44.933 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:08:44.933 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.R6Q 00:08:44.933 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 4587d15009cb909341e61906c4fb313b 1 00:08:44.933 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 4587d15009cb909341e61906c4fb313b 1 00:08:44.933 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:08:44.933 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:08:44.933 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=4587d15009cb909341e61906c4fb313b 00:08:44.933 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:08:44.933 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:08:44.933 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.R6Q 00:08:44.933 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.R6Q 00:08:44.933 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.R6Q 00:08:44.933 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:08:44.933 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:08:44.933 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:08:44.933 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:08:44.933 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:08:44.933 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:08:44.933 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:08:45.191 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=6beb5b243a0ab5e145afc13a3c7bac13edf7a88b600f5643 00:08:45.191 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:08:45.191 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.c2k 00:08:45.191 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 6beb5b243a0ab5e145afc13a3c7bac13edf7a88b600f5643 2 00:08:45.191 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 6beb5b243a0ab5e145afc13a3c7bac13edf7a88b600f5643 2 00:08:45.191 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:08:45.191 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 00:08:45.191 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=6beb5b243a0ab5e145afc13a3c7bac13edf7a88b600f5643 00:08:45.191 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:08:45.191 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:08:45.191 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.c2k 00:08:45.191 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.c2k 00:08:45.191 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.c2k 00:08:45.191 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:08:45.191 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:08:45.191 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:08:45.191 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:08:45.191 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:08:45.191 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:08:45.191 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:08:45.191 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=15b4f9fdba84bca77f665cf137eb291b396e8dfae1f7a7cc 00:08:45.191 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:08:45.191 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.6Q3 00:08:45.191 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 15b4f9fdba84bca77f665cf137eb291b396e8dfae1f7a7cc 2 00:08:45.191 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 15b4f9fdba84bca77f665cf137eb291b396e8dfae1f7a7cc 2 00:08:45.191 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:08:45.191 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:08:45.191 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=15b4f9fdba84bca77f665cf137eb291b396e8dfae1f7a7cc 00:08:45.191 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:08:45.191 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:08:45.191 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.6Q3 00:08:45.191 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.6Q3 00:08:45.191 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.6Q3 00:08:45.191 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:08:45.191 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:08:45.191 19:42:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:08:45.191 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:08:45.191 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:08:45.191 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:08:45.191 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:08:45.191 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=8d88c1f508765b99298a8481bf95a0b9 00:08:45.191 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:08:45.192 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.buG 00:08:45.192 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 8d88c1f508765b99298a8481bf95a0b9 1 00:08:45.192 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 8d88c1f508765b99298a8481bf95a0b9 1 00:08:45.192 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:08:45.192 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:08:45.192 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=8d88c1f508765b99298a8481bf95a0b9 00:08:45.192 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:08:45.192 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:08:45.192 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.buG 00:08:45.192 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.buG 00:08:45.192 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.buG 00:08:45.192 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:08:45.192 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:08:45.192 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:08:45.192 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:08:45.192 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:08:45.192 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:08:45.192 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:08:45.192 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f0a51898c9c520c7042140555e9aa62d296f5f21709fefe3136c7aff6ef04674 00:08:45.192 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:08:45.192 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.cVG 00:08:45.192 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
f0a51898c9c520c7042140555e9aa62d296f5f21709fefe3136c7aff6ef04674 3 00:08:45.192 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f0a51898c9c520c7042140555e9aa62d296f5f21709fefe3136c7aff6ef04674 3 00:08:45.192 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:08:45.192 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:08:45.192 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f0a51898c9c520c7042140555e9aa62d296f5f21709fefe3136c7aff6ef04674 00:08:45.192 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:08:45.192 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:08:45.192 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.cVG 00:08:45.192 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.cVG 00:08:45.192 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.cVG 00:08:45.192 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:08:45.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:45.192 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 66166 00:08:45.192 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 66166 ']' 00:08:45.192 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.192 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:45.192 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.192 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:45.192 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:08:45.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:08:45.449 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:45.449 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:08:45.449 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 66198 /var/tmp/host.sock 00:08:45.449 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 66198 ']' 00:08:45.449 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:08:45.449 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:45.449 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
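The gen_dhchap_key calls traced above pull random bytes with xxd, wrap them into a DHHC-1 secret string, and store the result in a chmod-0600 temp file. A minimal stand-alone sketch of that formatting step, assuming the DHHC-1 representation is base64 of the raw secret followed by its little-endian CRC-32, with the two-digit field naming the associated hash (00=null, 01=sha256, 02=sha384, 03=sha512):

# Sketch only: mirrors the xxd + python formatting seen in the trace; the CRC-32 detail is an assumption.
key=$(xxd -p -c0 -l 16 /dev/urandom)    # 16 random bytes -> 32 hex chars, sized for a sha256 key
digest=1                                # 0=null, 1=sha256, 2=sha384, 3=sha512
secret=$(python3 -c 'import base64, sys, zlib; k = sys.argv[1].encode(); crc = zlib.crc32(k).to_bytes(4, "little"); print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k + crc).decode()), end="")' "$key" "$digest")
file=$(mktemp -t spdk.key-sha256.XXX)
echo "$secret" > "$file"
chmod 0600 "$file"                      # key files must not be world-readable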
00:08:45.449 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:45.449 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:08:45.706 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:45.706 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:08:45.706 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:08:45.706 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.706 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:08:45.706 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.706 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:08:45.706 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.rUM 00:08:45.706 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.706 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:08:45.706 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.706 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.rUM 00:08:45.706 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.rUM 00:08:45.964 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.wcb ]] 00:08:45.964 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.wcb 00:08:45.964 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.964 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:08:45.964 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.964 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.wcb 00:08:45.964 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.wcb 00:08:45.964 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:08:45.964 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.R6Q 00:08:45.964 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.964 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:08:45.964 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.964 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.R6Q 00:08:45.964 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.R6Q 00:08:46.221 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.c2k ]] 00:08:46.221 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.c2k 00:08:46.221 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.221 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:08:46.221 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.221 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.c2k 00:08:46.221 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.c2k 00:08:46.478 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:08:46.478 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.6Q3 00:08:46.478 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.478 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:08:46.478 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.478 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.6Q3 00:08:46.478 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.6Q3 00:08:46.736 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.buG ]] 00:08:46.736 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.buG 00:08:46.736 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.736 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:08:46.736 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.736 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.buG 00:08:46.736 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.buG 00:08:46.993 19:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:08:46.993 19:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.cVG 00:08:46.993 19:42:42 
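Each generated key file is registered twice in the loop above: once with the nvmf target over its default RPC socket and once with the host-side SPDK application listening on /var/tmp/host.sock, so both ends can refer to the same secret by keyring name. The two sides of that registration for key1/ckey1, as issued by rpc_cmd and hostrpc in the trace (rpc.py shown relative to the SPDK repo root):

# Target side (default /var/tmp/spdk.sock): make key1 and its controller key available to the subsystem
scripts/rpc.py keyring_file_add_key key1 /tmp/spdk.key-sha256.R6Q
scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.c2k
# Host side (/var/tmp/host.sock): register the same files for the initiator bdev layer
scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.R6Q
scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.c2k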
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.993 19:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:08:46.993 19:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.993 19:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.cVG 00:08:46.993 19:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.cVG 00:08:47.249 19:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:08:47.249 19:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:08:47.249 19:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:08:47.249 19:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:08:47.249 19:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:08:47.249 19:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:08:47.506 19:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:08:47.506 19:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:08:47.506 19:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:08:47.506 19:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:08:47.506 19:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:08:47.506 19:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:08:47.506 19:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:08:47.506 19:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.506 19:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:08:47.506 19:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.506 19:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:08:47.506 19:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:08:47.506 19:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:08:47.763 00:08:47.763 19:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:08:47.763 19:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:08:47.763 19:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:08:48.021 19:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:08:48.021 19:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:08:48.021 19:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.021 19:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:08:48.021 19:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.021 19:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:08:48.021 { 00:08:48.021 "cntlid": 1, 00:08:48.021 "qid": 0, 00:08:48.021 "state": "enabled", 00:08:48.021 "thread": "nvmf_tgt_poll_group_000", 00:08:48.021 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:08:48.021 "listen_address": { 00:08:48.021 "trtype": "TCP", 00:08:48.021 "adrfam": "IPv4", 00:08:48.021 "traddr": "10.0.0.3", 00:08:48.021 "trsvcid": "4420" 00:08:48.021 }, 00:08:48.021 "peer_address": { 00:08:48.021 "trtype": "TCP", 00:08:48.021 "adrfam": "IPv4", 00:08:48.021 "traddr": "10.0.0.1", 00:08:48.021 "trsvcid": "50902" 00:08:48.021 }, 00:08:48.021 "auth": { 00:08:48.021 "state": "completed", 00:08:48.021 "digest": "sha256", 00:08:48.021 "dhgroup": "null" 00:08:48.021 } 00:08:48.021 } 00:08:48.021 ]' 00:08:48.021 19:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:08:48.021 19:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:08:48.021 19:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:08:48.021 19:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:08:48.021 19:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:08:48.021 19:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:08:48.021 19:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:08:48.021 19:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:08:48.278 19:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDY1ZGEwOGRhYjUzZjE5NzI1ZjE1YWM1OWNmMDQ4ZTFjMGUzOTRhZGRlNDU1NmUwwJmTAQ==: --dhchap-ctrl-secret DHHC-1:03:YzgzZWVhZmJmNDNjOTRlODNlNmZhNDZkZGNjZTg0NWRkZjc0NWU0OTRiYWM0NzUwZWIzZjA4MTFkMWY2YTIzOTavnEE=: 00:08:48.278 19:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
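One pass of connect_authenticate wires the pieces together: the host restricts DH-CHAP negotiation to the digest and DH group under test, the target admits the host NQN on the subsystem with the key pair, and the host attaches a controller using the same keyring entries. The sequence for the first iteration (sha256 digest, null dhgroup, key0/ckey0), condensed from the trace:

hostnqn=nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2
subnqn=nqn.2024-03.io.spdk:cnode0
# Host: only offer sha256 and the null DH group during authentication
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
# Target: admit the host on the subsystem and bind its DH-CHAP key pair
scripts/rpc.py nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
# Host: attach an authenticated controller over TCP
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0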
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:00:ZDY1ZGEwOGRhYjUzZjE5NzI1ZjE1YWM1OWNmMDQ4ZTFjMGUzOTRhZGRlNDU1NmUwwJmTAQ==: --dhchap-ctrl-secret DHHC-1:03:YzgzZWVhZmJmNDNjOTRlODNlNmZhNDZkZGNjZTg0NWRkZjc0NWU0OTRiYWM0NzUwZWIzZjA4MTFkMWY2YTIzOTavnEE=: 00:08:52.455 19:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:08:52.455 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:08:52.455 19:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:08:52.455 19:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.455 19:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:08:52.455 19:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.455 19:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:08:52.455 19:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:08:52.455 19:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:08:52.455 19:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:08:52.455 19:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:08:52.455 19:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:08:52.455 19:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:08:52.455 19:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:08:52.455 19:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:08:52.455 19:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:08:52.455 19:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.455 19:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:08:52.455 19:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.455 19:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:08:52.455 19:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:08:52.455 19:42:47 
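After the SPDK-host check, the same subsystem is exercised with the kernel initiator: nvme-cli is handed the DHHC-1 strings directly and must complete bidirectional DH-CHAP before the disconnect and remove_host teardown. A condensed form of that cycle; reading the secrets back from the generated key files is an assumed shortcut, since those files hold the same DHHC-1 strings the trace passes literally:

hostid=91838eb1-5852-43eb-90b2-09876f360ab2
hostnqn=nqn.2014-08.org.nvmexpress:uuid:$hostid
subnqn=nqn.2024-03.io.spdk:cnode0
# Connect, presenting the host secret and expecting the controller to answer with the ctrl secret
nvme connect -t tcp -a 10.0.0.3 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" -l 0 \
    --dhchap-secret "$(cat /tmp/spdk.key-null.rUM)" --dhchap-ctrl-secret "$(cat /tmp/spdk.key-sha512.wcb)"
# Tear down before the next key/dhgroup combination
nvme disconnect -n "$subnqn"
scripts/rpc.py nvmf_subsystem_remove_host "$subnqn" "$hostnqn"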
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:08:52.455 00:08:52.713 19:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:08:52.713 19:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:08:52.713 19:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:08:52.713 19:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:08:52.713 19:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:08:52.713 19:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.713 19:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:08:52.713 19:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.713 19:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:08:52.713 { 00:08:52.713 "cntlid": 3, 00:08:52.713 "qid": 0, 00:08:52.713 "state": "enabled", 00:08:52.713 "thread": "nvmf_tgt_poll_group_000", 00:08:52.713 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:08:52.713 "listen_address": { 00:08:52.713 "trtype": "TCP", 00:08:52.713 "adrfam": "IPv4", 00:08:52.713 "traddr": "10.0.0.3", 00:08:52.713 "trsvcid": "4420" 00:08:52.713 }, 00:08:52.713 "peer_address": { 00:08:52.713 "trtype": "TCP", 00:08:52.713 "adrfam": "IPv4", 00:08:52.713 "traddr": "10.0.0.1", 00:08:52.713 "trsvcid": "40076" 00:08:52.713 }, 00:08:52.713 "auth": { 00:08:52.713 "state": "completed", 00:08:52.713 "digest": "sha256", 00:08:52.713 "dhgroup": "null" 00:08:52.713 } 00:08:52.713 } 00:08:52.713 ]' 00:08:52.713 19:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:08:52.971 19:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:08:52.971 19:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:08:52.971 19:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:08:52.971 19:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:08:52.971 19:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:08:52.971 19:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:08:52.971 19:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:08:53.228 19:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDU4N2QxNTAwOWNiOTA5MzQxZTYxOTA2YzRmYjMxM2Jh56XD: --dhchap-ctrl-secret 
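Each attach is verified before teardown: nvmf_subsystem_get_qpairs returns the per-queue auth block, and the script asserts that negotiation completed with exactly the digest and DH group it configured. The checks as run against the target RPC socket, followed by the host-side detach:

subnqn=nqn.2024-03.io.spdk:cnode0
qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs "$subnqn")
jq -r '.[0].auth.state'   <<< "$qpairs"   # expected: completed
jq -r '.[0].auth.digest'  <<< "$qpairs"   # expected: sha256 for this pass
jq -r '.[0].auth.dhgroup' <<< "$qpairs"   # expected: null here, ffdhe2048 in the later pass
# Drop the host-side controller before moving on
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0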
DHHC-1:02:NmJlYjViMjQzYTBhYjVlMTQ1YWZjMTNhM2M3YmFjMTNlZGY3YTg4YjYwMGY1NjQzMW7RBQ==: 00:08:53.228 19:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:01:NDU4N2QxNTAwOWNiOTA5MzQxZTYxOTA2YzRmYjMxM2Jh56XD: --dhchap-ctrl-secret DHHC-1:02:NmJlYjViMjQzYTBhYjVlMTQ1YWZjMTNhM2M3YmFjMTNlZGY3YTg4YjYwMGY1NjQzMW7RBQ==: 00:08:53.884 19:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:08:53.884 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:08:53.884 19:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:08:53.884 19:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.884 19:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:08:53.884 19:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.884 19:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:08:53.884 19:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:08:53.884 19:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:08:54.159 19:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:08:54.159 19:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:08:54.159 19:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:08:54.159 19:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:08:54.159 19:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:08:54.159 19:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:08:54.159 19:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:08:54.159 19:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.159 19:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:08:54.159 19:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.159 19:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:08:54.159 19:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:08:54.159 19:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:08:54.417 00:08:54.417 19:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:08:54.417 19:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:08:54.417 19:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:08:54.417 19:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:08:54.417 19:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:08:54.417 19:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.417 19:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:08:54.417 19:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.417 19:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:08:54.417 { 00:08:54.417 "cntlid": 5, 00:08:54.417 "qid": 0, 00:08:54.417 "state": "enabled", 00:08:54.417 "thread": "nvmf_tgt_poll_group_000", 00:08:54.417 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:08:54.417 "listen_address": { 00:08:54.417 "trtype": "TCP", 00:08:54.417 "adrfam": "IPv4", 00:08:54.417 "traddr": "10.0.0.3", 00:08:54.417 "trsvcid": "4420" 00:08:54.417 }, 00:08:54.417 "peer_address": { 00:08:54.417 "trtype": "TCP", 00:08:54.417 "adrfam": "IPv4", 00:08:54.417 "traddr": "10.0.0.1", 00:08:54.417 "trsvcid": "40104" 00:08:54.417 }, 00:08:54.417 "auth": { 00:08:54.418 "state": "completed", 00:08:54.418 "digest": "sha256", 00:08:54.418 "dhgroup": "null" 00:08:54.418 } 00:08:54.418 } 00:08:54.418 ]' 00:08:54.418 19:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:08:54.676 19:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:08:54.676 19:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:08:54.676 19:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:08:54.676 19:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:08:54.676 19:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:08:54.676 19:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:08:54.676 19:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:08:54.676 19:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:MTViNGY5ZmRiYTg0YmNhNzdmNjY1Y2YxMzdlYjI5MWIzOTZlOGRmYWUxZjdhN2NjEiwkGw==: --dhchap-ctrl-secret DHHC-1:01:OGQ4OGMxZjUwODc2NWI5OTI5OGE4NDgxYmY5NWEwYjk6tdyP: 00:08:54.676 19:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:02:MTViNGY5ZmRiYTg0YmNhNzdmNjY1Y2YxMzdlYjI5MWIzOTZlOGRmYWUxZjdhN2NjEiwkGw==: --dhchap-ctrl-secret DHHC-1:01:OGQ4OGMxZjUwODc2NWI5OTI5OGE4NDgxYmY5NWEwYjk6tdyP: 00:08:55.607 19:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:08:55.607 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:08:55.607 19:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:08:55.608 19:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.608 19:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:08:55.608 19:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.608 19:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:08:55.608 19:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:08:55.608 19:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:08:55.608 19:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:08:55.608 19:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:08:55.608 19:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:08:55.608 19:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:08:55.608 19:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:08:55.608 19:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:08:55.608 19:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key3 00:08:55.608 19:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.608 19:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:08:55.608 19:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.608 19:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:08:55.608 19:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:08:55.608 19:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:08:55.865 00:08:55.865 19:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:08:55.865 19:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:08:55.866 19:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:08:56.123 19:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:08:56.123 19:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:08:56.123 19:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.123 19:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:08:56.123 19:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.123 19:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:08:56.123 { 00:08:56.123 "cntlid": 7, 00:08:56.123 "qid": 0, 00:08:56.123 "state": "enabled", 00:08:56.123 "thread": "nvmf_tgt_poll_group_000", 00:08:56.123 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:08:56.123 "listen_address": { 00:08:56.123 "trtype": "TCP", 00:08:56.123 "adrfam": "IPv4", 00:08:56.123 "traddr": "10.0.0.3", 00:08:56.123 "trsvcid": "4420" 00:08:56.123 }, 00:08:56.123 "peer_address": { 00:08:56.123 "trtype": "TCP", 00:08:56.123 "adrfam": "IPv4", 00:08:56.123 "traddr": "10.0.0.1", 00:08:56.123 "trsvcid": "40126" 00:08:56.123 }, 00:08:56.123 "auth": { 00:08:56.123 "state": "completed", 00:08:56.123 "digest": "sha256", 00:08:56.123 "dhgroup": "null" 00:08:56.123 } 00:08:56.123 } 00:08:56.123 ]' 00:08:56.123 19:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:08:56.123 19:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:08:56.123 19:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:08:56.123 19:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:08:56.123 19:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:08:56.123 19:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:08:56.123 19:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:08:56.123 19:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:08:56.380 19:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
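keys[3] has no companion ckeys[3], so the key3 iterations exercise one-way authentication: the target still verifies the host, but no controller secret is configured and the attach/connect calls simply omit the ctrlr/ctrl key arguments. Condensed from the key3 pass in the trace:

hostnqn=nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2
subnqn=nqn.2024-03.io.spdk:cnode0
scripts/rpc.py nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key3          # no --dhchap-ctrlr-key
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key3                              # no ckey3
nvme connect -t tcp -a 10.0.0.3 -n "$subnqn" -i 1 -q "$hostnqn" \
    --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 \
    --dhchap-secret "$(cat /tmp/spdk.key-sha512.cVG)"                                  # no --dhchap-ctrl-secret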
DHHC-1:03:ZjBhNTE4OThjOWM1MjBjNzA0MjE0MDU1NWU5YWE2MmQyOTZmNWYyMTcwOWZlZmUzMTM2YzdhZmY2ZWYwNDY3NMoRYKs=: 00:08:56.380 19:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:03:ZjBhNTE4OThjOWM1MjBjNzA0MjE0MDU1NWU5YWE2MmQyOTZmNWYyMTcwOWZlZmUzMTM2YzdhZmY2ZWYwNDY3NMoRYKs=: 00:08:56.944 19:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:08:56.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:08:56.944 19:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:08:56.944 19:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.944 19:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:08:56.944 19:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.944 19:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:08:56.944 19:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:08:56.944 19:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:08:56.944 19:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:08:57.202 19:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:08:57.202 19:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:08:57.202 19:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:08:57.202 19:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:08:57.202 19:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:08:57.202 19:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:08:57.202 19:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:08:57.202 19:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.202 19:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:08:57.202 19:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.202 19:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:08:57.202 19:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:08:57.202 19:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:08:57.459 00:08:57.459 19:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:08:57.459 19:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:08:57.459 19:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:08:57.717 19:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:08:57.717 19:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:08:57.717 19:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.717 19:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:08:57.717 19:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.717 19:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:08:57.717 { 00:08:57.717 "cntlid": 9, 00:08:57.717 "qid": 0, 00:08:57.717 "state": "enabled", 00:08:57.717 "thread": "nvmf_tgt_poll_group_000", 00:08:57.717 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:08:57.717 "listen_address": { 00:08:57.717 "trtype": "TCP", 00:08:57.717 "adrfam": "IPv4", 00:08:57.717 "traddr": "10.0.0.3", 00:08:57.717 "trsvcid": "4420" 00:08:57.717 }, 00:08:57.717 "peer_address": { 00:08:57.717 "trtype": "TCP", 00:08:57.717 "adrfam": "IPv4", 00:08:57.717 "traddr": "10.0.0.1", 00:08:57.717 "trsvcid": "40146" 00:08:57.717 }, 00:08:57.717 "auth": { 00:08:57.717 "state": "completed", 00:08:57.717 "digest": "sha256", 00:08:57.717 "dhgroup": "ffdhe2048" 00:08:57.717 } 00:08:57.717 } 00:08:57.717 ]' 00:08:57.717 19:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:08:57.717 19:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:08:57.717 19:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:08:57.717 19:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:08:57.717 19:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:08:57.717 19:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:08:57.717 19:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:08:57.717 19:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:08:57.974 
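The second pass repeats the same key matrix with a finite-field DH group, so each exchange also performs an FFDHE-2048 key agreement on top of the challenge/response; only the host-side option changes, and the qpair's auth.dhgroup is then expected to report ffdhe2048 instead of null:

# Host: same digest, but negotiate the ffdhe2048 DH group from now on
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
# After the next attach, the target should report the group that was actually negotiated
scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.dhgroup'   # expected: ffdhe2048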
19:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDY1ZGEwOGRhYjUzZjE5NzI1ZjE1YWM1OWNmMDQ4ZTFjMGUzOTRhZGRlNDU1NmUwwJmTAQ==: --dhchap-ctrl-secret DHHC-1:03:YzgzZWVhZmJmNDNjOTRlODNlNmZhNDZkZGNjZTg0NWRkZjc0NWU0OTRiYWM0NzUwZWIzZjA4MTFkMWY2YTIzOTavnEE=: 00:08:57.974 19:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:00:ZDY1ZGEwOGRhYjUzZjE5NzI1ZjE1YWM1OWNmMDQ4ZTFjMGUzOTRhZGRlNDU1NmUwwJmTAQ==: --dhchap-ctrl-secret DHHC-1:03:YzgzZWVhZmJmNDNjOTRlODNlNmZhNDZkZGNjZTg0NWRkZjc0NWU0OTRiYWM0NzUwZWIzZjA4MTFkMWY2YTIzOTavnEE=: 00:08:58.539 19:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:08:58.539 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:08:58.539 19:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:08:58.539 19:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.539 19:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:08:58.539 19:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.539 19:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:08:58.539 19:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:08:58.539 19:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:08:58.539 19:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:08:58.539 19:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:08:58.539 19:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:08:58.539 19:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:08:58.539 19:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:08:58.539 19:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:08:58.539 19:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:08:58.539 19:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.539 19:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:08:58.796 19:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.796 19:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:08:58.796 19:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:08:58.796 19:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:08:59.054 00:08:59.054 19:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:08:59.054 19:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:08:59.054 19:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:08:59.054 19:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:08:59.054 19:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:08:59.054 19:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.054 19:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:08:59.054 19:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.054 19:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:08:59.054 { 00:08:59.054 "cntlid": 11, 00:08:59.054 "qid": 0, 00:08:59.054 "state": "enabled", 00:08:59.054 "thread": "nvmf_tgt_poll_group_000", 00:08:59.054 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:08:59.054 "listen_address": { 00:08:59.054 "trtype": "TCP", 00:08:59.054 "adrfam": "IPv4", 00:08:59.054 "traddr": "10.0.0.3", 00:08:59.054 "trsvcid": "4420" 00:08:59.054 }, 00:08:59.054 "peer_address": { 00:08:59.054 "trtype": "TCP", 00:08:59.054 "adrfam": "IPv4", 00:08:59.054 "traddr": "10.0.0.1", 00:08:59.054 "trsvcid": "40176" 00:08:59.054 }, 00:08:59.054 "auth": { 00:08:59.054 "state": "completed", 00:08:59.054 "digest": "sha256", 00:08:59.054 "dhgroup": "ffdhe2048" 00:08:59.055 } 00:08:59.055 } 00:08:59.055 ]' 00:08:59.055 19:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:08:59.313 19:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:08:59.313 19:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:08:59.313 19:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:08:59.313 19:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:08:59.313 19:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:08:59.313 19:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:08:59.313 
19:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:08:59.570 19:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDU4N2QxNTAwOWNiOTA5MzQxZTYxOTA2YzRmYjMxM2Jh56XD: --dhchap-ctrl-secret DHHC-1:02:NmJlYjViMjQzYTBhYjVlMTQ1YWZjMTNhM2M3YmFjMTNlZGY3YTg4YjYwMGY1NjQzMW7RBQ==: 00:08:59.570 19:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:01:NDU4N2QxNTAwOWNiOTA5MzQxZTYxOTA2YzRmYjMxM2Jh56XD: --dhchap-ctrl-secret DHHC-1:02:NmJlYjViMjQzYTBhYjVlMTQ1YWZjMTNhM2M3YmFjMTNlZGY3YTg4YjYwMGY1NjQzMW7RBQ==: 00:09:00.134 19:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:00.134 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:00.134 19:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:09:00.134 19:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.134 19:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:00.134 19:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.134 19:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:00.134 19:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:00.134 19:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:00.134 19:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:09:00.134 19:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:00.134 19:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:00.134 19:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:09:00.134 19:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:09:00.134 19:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:00.134 19:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:00.135 19:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.135 19:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:00.135 19:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:09:00.135 19:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:00.135 19:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:00.135 19:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:00.391 00:09:00.648 19:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:00.648 19:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:00.648 19:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:00.648 19:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:00.648 19:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:00.648 19:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.648 19:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:00.648 19:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.648 19:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:00.648 { 00:09:00.648 "cntlid": 13, 00:09:00.648 "qid": 0, 00:09:00.648 "state": "enabled", 00:09:00.648 "thread": "nvmf_tgt_poll_group_000", 00:09:00.648 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:09:00.648 "listen_address": { 00:09:00.648 "trtype": "TCP", 00:09:00.648 "adrfam": "IPv4", 00:09:00.648 "traddr": "10.0.0.3", 00:09:00.648 "trsvcid": "4420" 00:09:00.648 }, 00:09:00.648 "peer_address": { 00:09:00.648 "trtype": "TCP", 00:09:00.648 "adrfam": "IPv4", 00:09:00.648 "traddr": "10.0.0.1", 00:09:00.648 "trsvcid": "40212" 00:09:00.648 }, 00:09:00.648 "auth": { 00:09:00.648 "state": "completed", 00:09:00.648 "digest": "sha256", 00:09:00.648 "dhgroup": "ffdhe2048" 00:09:00.648 } 00:09:00.648 } 00:09:00.648 ]' 00:09:00.648 19:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:00.648 19:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:00.648 19:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:00.905 19:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:00.905 19:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:00.905 19:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:00.905 19:42:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:00.905 19:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:00.905 19:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTViNGY5ZmRiYTg0YmNhNzdmNjY1Y2YxMzdlYjI5MWIzOTZlOGRmYWUxZjdhN2NjEiwkGw==: --dhchap-ctrl-secret DHHC-1:01:OGQ4OGMxZjUwODc2NWI5OTI5OGE4NDgxYmY5NWEwYjk6tdyP: 00:09:00.905 19:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:02:MTViNGY5ZmRiYTg0YmNhNzdmNjY1Y2YxMzdlYjI5MWIzOTZlOGRmYWUxZjdhN2NjEiwkGw==: --dhchap-ctrl-secret DHHC-1:01:OGQ4OGMxZjUwODc2NWI5OTI5OGE4NDgxYmY5NWEwYjk6tdyP: 00:09:01.841 19:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:01.841 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:01.841 19:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:09:01.841 19:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.841 19:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:01.841 19:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.841 19:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:01.841 19:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:01.841 19:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:01.841 19:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:09:01.841 19:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:01.841 19:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:01.841 19:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:09:01.841 19:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:09:01.841 19:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:01.841 19:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key3 00:09:01.841 19:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.841 19:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
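The entries above and below repeat one iteration of auth.sh's inner loop: for each DH group exercised in this stretch of the trace (ffdhe2048, ffdhe3072, ffdhe4096) and each key index 0-3, the host restricts bdev_nvme to a single sha256 digest and DH group, the host NQN is added to the subsystem with the matching DH-HMAC-CHAP key (plus a controller key where one is defined), a controller is attached, the new qpair's auth block is checked with jq, the controller is detached, the same secrets are driven once more through nvme-cli, and the host entry is removed. Below is a minimal sketch of one such iteration, condensed from the commands recorded verbatim in this log and not itself part of the test output; key2/ckey2 refer to DH-HMAC-CHAP keys set up earlier in the run (outside this excerpt), the DHHC-1 secrets are elided here but appear in full in the surrounding entries, and what the trace calls hostrpc is simply rpc.py pointed at the host-side SPDK app's socket /var/tmp/host.sock.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2

    # host side: allow only the digest/dhgroup pair under test
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

    # target side: authorize the host NQN with the key pair under test
    $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # host side: attach a controller and confirm DH-HMAC-CHAP completed on the qpair
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
    $rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'   # expected: completed
    $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

    # same secrets through the kernel initiator, then clean up the host entry
    nvme connect -t tcp -a 10.0.0.3 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 \
        --dhchap-secret DHHC-1:01:... --dhchap-ctrl-secret DHHC-1:02:...
    nvme disconnect -n "$subnqn"
    $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
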
00:09:01.841 19:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.841 19:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:09:01.841 19:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:01.841 19:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:02.098 00:09:02.098 19:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:02.098 19:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:02.098 19:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:02.356 19:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:02.356 19:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:02.356 19:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.356 19:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:02.356 19:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.356 19:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:02.356 { 00:09:02.356 "cntlid": 15, 00:09:02.356 "qid": 0, 00:09:02.356 "state": "enabled", 00:09:02.356 "thread": "nvmf_tgt_poll_group_000", 00:09:02.356 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:09:02.356 "listen_address": { 00:09:02.356 "trtype": "TCP", 00:09:02.356 "adrfam": "IPv4", 00:09:02.356 "traddr": "10.0.0.3", 00:09:02.356 "trsvcid": "4420" 00:09:02.356 }, 00:09:02.356 "peer_address": { 00:09:02.356 "trtype": "TCP", 00:09:02.356 "adrfam": "IPv4", 00:09:02.356 "traddr": "10.0.0.1", 00:09:02.356 "trsvcid": "50898" 00:09:02.356 }, 00:09:02.356 "auth": { 00:09:02.356 "state": "completed", 00:09:02.356 "digest": "sha256", 00:09:02.356 "dhgroup": "ffdhe2048" 00:09:02.356 } 00:09:02.356 } 00:09:02.356 ]' 00:09:02.356 19:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:02.356 19:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:02.356 19:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:02.356 19:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:02.356 19:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:02.613 19:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:02.613 
19:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:02.614 19:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:02.614 19:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjBhNTE4OThjOWM1MjBjNzA0MjE0MDU1NWU5YWE2MmQyOTZmNWYyMTcwOWZlZmUzMTM2YzdhZmY2ZWYwNDY3NMoRYKs=: 00:09:02.614 19:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:03:ZjBhNTE4OThjOWM1MjBjNzA0MjE0MDU1NWU5YWE2MmQyOTZmNWYyMTcwOWZlZmUzMTM2YzdhZmY2ZWYwNDY3NMoRYKs=: 00:09:03.555 19:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:03.555 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:03.555 19:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:09:03.555 19:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.555 19:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:03.555 19:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.555 19:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:09:03.555 19:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:03.555 19:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:03.555 19:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:03.812 19:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:09:03.812 19:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:03.812 19:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:03.812 19:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:09:03.812 19:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:09:03.812 19:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:03.812 19:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:03.812 19:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.812 19:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:09:03.812 19:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.812 19:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:03.812 19:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:03.812 19:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:04.074 00:09:04.074 19:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:04.074 19:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:04.074 19:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:04.074 19:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:04.074 19:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:04.074 19:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.074 19:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:04.333 19:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.333 19:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:04.333 { 00:09:04.333 "cntlid": 17, 00:09:04.333 "qid": 0, 00:09:04.333 "state": "enabled", 00:09:04.333 "thread": "nvmf_tgt_poll_group_000", 00:09:04.333 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:09:04.333 "listen_address": { 00:09:04.333 "trtype": "TCP", 00:09:04.333 "adrfam": "IPv4", 00:09:04.333 "traddr": "10.0.0.3", 00:09:04.333 "trsvcid": "4420" 00:09:04.333 }, 00:09:04.333 "peer_address": { 00:09:04.333 "trtype": "TCP", 00:09:04.333 "adrfam": "IPv4", 00:09:04.333 "traddr": "10.0.0.1", 00:09:04.333 "trsvcid": "50934" 00:09:04.333 }, 00:09:04.333 "auth": { 00:09:04.333 "state": "completed", 00:09:04.333 "digest": "sha256", 00:09:04.333 "dhgroup": "ffdhe3072" 00:09:04.333 } 00:09:04.333 } 00:09:04.333 ]' 00:09:04.333 19:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:04.333 19:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:04.333 19:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:04.333 19:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:09:04.333 19:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:04.333 19:42:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:04.333 19:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:04.333 19:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:04.589 19:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDY1ZGEwOGRhYjUzZjE5NzI1ZjE1YWM1OWNmMDQ4ZTFjMGUzOTRhZGRlNDU1NmUwwJmTAQ==: --dhchap-ctrl-secret DHHC-1:03:YzgzZWVhZmJmNDNjOTRlODNlNmZhNDZkZGNjZTg0NWRkZjc0NWU0OTRiYWM0NzUwZWIzZjA4MTFkMWY2YTIzOTavnEE=: 00:09:04.589 19:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:00:ZDY1ZGEwOGRhYjUzZjE5NzI1ZjE1YWM1OWNmMDQ4ZTFjMGUzOTRhZGRlNDU1NmUwwJmTAQ==: --dhchap-ctrl-secret DHHC-1:03:YzgzZWVhZmJmNDNjOTRlODNlNmZhNDZkZGNjZTg0NWRkZjc0NWU0OTRiYWM0NzUwZWIzZjA4MTFkMWY2YTIzOTavnEE=: 00:09:05.152 19:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:05.152 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:05.152 19:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:09:05.152 19:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.152 19:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:05.152 19:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.152 19:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:05.152 19:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:05.152 19:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:05.409 19:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:09:05.409 19:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:05.409 19:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:05.409 19:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:09:05.409 19:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:09:05.409 19:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:05.409 19:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:09:05.409 19:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.409 19:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:05.409 19:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.409 19:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:05.409 19:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:05.409 19:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:05.666 00:09:05.667 19:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:05.667 19:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:05.667 19:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:05.924 19:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:05.924 19:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:05.924 19:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.924 19:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:05.924 19:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.924 19:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:05.924 { 00:09:05.924 "cntlid": 19, 00:09:05.924 "qid": 0, 00:09:05.924 "state": "enabled", 00:09:05.924 "thread": "nvmf_tgt_poll_group_000", 00:09:05.924 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:09:05.924 "listen_address": { 00:09:05.924 "trtype": "TCP", 00:09:05.924 "adrfam": "IPv4", 00:09:05.924 "traddr": "10.0.0.3", 00:09:05.924 "trsvcid": "4420" 00:09:05.924 }, 00:09:05.924 "peer_address": { 00:09:05.924 "trtype": "TCP", 00:09:05.924 "adrfam": "IPv4", 00:09:05.924 "traddr": "10.0.0.1", 00:09:05.924 "trsvcid": "50968" 00:09:05.924 }, 00:09:05.924 "auth": { 00:09:05.924 "state": "completed", 00:09:05.924 "digest": "sha256", 00:09:05.924 "dhgroup": "ffdhe3072" 00:09:05.924 } 00:09:05.924 } 00:09:05.924 ]' 00:09:05.924 19:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:05.924 19:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:05.924 19:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:05.924 19:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:09:05.924 19:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:05.924 19:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:05.924 19:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:05.924 19:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:06.181 19:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDU4N2QxNTAwOWNiOTA5MzQxZTYxOTA2YzRmYjMxM2Jh56XD: --dhchap-ctrl-secret DHHC-1:02:NmJlYjViMjQzYTBhYjVlMTQ1YWZjMTNhM2M3YmFjMTNlZGY3YTg4YjYwMGY1NjQzMW7RBQ==: 00:09:06.181 19:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:01:NDU4N2QxNTAwOWNiOTA5MzQxZTYxOTA2YzRmYjMxM2Jh56XD: --dhchap-ctrl-secret DHHC-1:02:NmJlYjViMjQzYTBhYjVlMTQ1YWZjMTNhM2M3YmFjMTNlZGY3YTg4YjYwMGY1NjQzMW7RBQ==: 00:09:06.746 19:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:06.746 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:06.746 19:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:09:06.746 19:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.746 19:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:06.746 19:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.746 19:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:06.746 19:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:06.746 19:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:07.003 19:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:09:07.003 19:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:07.003 19:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:07.003 19:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:09:07.003 19:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:09:07.003 19:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:07.003 19:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:07.003 19:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.003 19:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:07.003 19:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.003 19:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:07.003 19:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:07.003 19:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:07.261 00:09:07.261 19:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:07.261 19:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:07.261 19:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:07.519 19:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:07.519 19:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:07.519 19:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.519 19:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:07.519 19:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.519 19:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:07.519 { 00:09:07.519 "cntlid": 21, 00:09:07.519 "qid": 0, 00:09:07.519 "state": "enabled", 00:09:07.519 "thread": "nvmf_tgt_poll_group_000", 00:09:07.519 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:09:07.519 "listen_address": { 00:09:07.519 "trtype": "TCP", 00:09:07.519 "adrfam": "IPv4", 00:09:07.519 "traddr": "10.0.0.3", 00:09:07.519 "trsvcid": "4420" 00:09:07.519 }, 00:09:07.519 "peer_address": { 00:09:07.519 "trtype": "TCP", 00:09:07.519 "adrfam": "IPv4", 00:09:07.519 "traddr": "10.0.0.1", 00:09:07.519 "trsvcid": "50988" 00:09:07.519 }, 00:09:07.519 "auth": { 00:09:07.519 "state": "completed", 00:09:07.519 "digest": "sha256", 00:09:07.519 "dhgroup": "ffdhe3072" 00:09:07.519 } 00:09:07.519 } 00:09:07.519 ]' 00:09:07.519 19:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:07.519 19:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:07.519 19:43:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:07.519 19:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:09:07.519 19:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:07.519 19:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:07.519 19:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:07.519 19:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:07.777 19:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTViNGY5ZmRiYTg0YmNhNzdmNjY1Y2YxMzdlYjI5MWIzOTZlOGRmYWUxZjdhN2NjEiwkGw==: --dhchap-ctrl-secret DHHC-1:01:OGQ4OGMxZjUwODc2NWI5OTI5OGE4NDgxYmY5NWEwYjk6tdyP: 00:09:07.777 19:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:02:MTViNGY5ZmRiYTg0YmNhNzdmNjY1Y2YxMzdlYjI5MWIzOTZlOGRmYWUxZjdhN2NjEiwkGw==: --dhchap-ctrl-secret DHHC-1:01:OGQ4OGMxZjUwODc2NWI5OTI5OGE4NDgxYmY5NWEwYjk6tdyP: 00:09:08.343 19:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:08.343 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:08.343 19:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:09:08.343 19:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.343 19:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:08.343 19:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.343 19:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:08.343 19:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:08.343 19:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:08.600 19:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:09:08.600 19:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:08.600 19:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:08.600 19:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:09:08.600 19:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:09:08.600 19:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:08.600 19:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key3 00:09:08.600 19:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.600 19:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:08.600 19:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.600 19:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:09:08.600 19:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:08.600 19:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:08.858 00:09:08.858 19:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:08.858 19:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:08.858 19:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:09.115 19:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:09.115 19:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:09.116 19:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.116 19:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:09.116 19:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.116 19:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:09.116 { 00:09:09.116 "cntlid": 23, 00:09:09.116 "qid": 0, 00:09:09.116 "state": "enabled", 00:09:09.116 "thread": "nvmf_tgt_poll_group_000", 00:09:09.116 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:09:09.116 "listen_address": { 00:09:09.116 "trtype": "TCP", 00:09:09.116 "adrfam": "IPv4", 00:09:09.116 "traddr": "10.0.0.3", 00:09:09.116 "trsvcid": "4420" 00:09:09.116 }, 00:09:09.116 "peer_address": { 00:09:09.116 "trtype": "TCP", 00:09:09.116 "adrfam": "IPv4", 00:09:09.116 "traddr": "10.0.0.1", 00:09:09.116 "trsvcid": "51006" 00:09:09.116 }, 00:09:09.116 "auth": { 00:09:09.116 "state": "completed", 00:09:09.116 "digest": "sha256", 00:09:09.116 "dhgroup": "ffdhe3072" 00:09:09.116 } 00:09:09.116 } 00:09:09.116 ]' 00:09:09.116 19:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:09.116 19:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:09:09.116 19:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:09.116 19:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:09:09.116 19:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:09.116 19:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:09.116 19:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:09.116 19:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:09.373 19:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjBhNTE4OThjOWM1MjBjNzA0MjE0MDU1NWU5YWE2MmQyOTZmNWYyMTcwOWZlZmUzMTM2YzdhZmY2ZWYwNDY3NMoRYKs=: 00:09:09.373 19:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:03:ZjBhNTE4OThjOWM1MjBjNzA0MjE0MDU1NWU5YWE2MmQyOTZmNWYyMTcwOWZlZmUzMTM2YzdhZmY2ZWYwNDY3NMoRYKs=: 00:09:09.939 19:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:09.940 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:09.940 19:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:09:09.940 19:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.940 19:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:09.940 19:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.940 19:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:09:09.940 19:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:09.940 19:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:09:09.940 19:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:09:10.198 19:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:09:10.198 19:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:10.198 19:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:10.198 19:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:09:10.198 19:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:09:10.198 19:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:10.198 19:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:10.198 19:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.198 19:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:10.198 19:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.198 19:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:10.198 19:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:10.198 19:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:10.456 00:09:10.456 19:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:10.456 19:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:10.456 19:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:10.713 19:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:10.713 19:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:10.713 19:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.713 19:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:10.713 19:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.713 19:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:10.713 { 00:09:10.713 "cntlid": 25, 00:09:10.713 "qid": 0, 00:09:10.713 "state": "enabled", 00:09:10.713 "thread": "nvmf_tgt_poll_group_000", 00:09:10.713 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:09:10.713 "listen_address": { 00:09:10.713 "trtype": "TCP", 00:09:10.713 "adrfam": "IPv4", 00:09:10.713 "traddr": "10.0.0.3", 00:09:10.713 "trsvcid": "4420" 00:09:10.713 }, 00:09:10.713 "peer_address": { 00:09:10.713 "trtype": "TCP", 00:09:10.713 "adrfam": "IPv4", 00:09:10.713 "traddr": "10.0.0.1", 00:09:10.713 "trsvcid": "51034" 00:09:10.713 }, 00:09:10.714 "auth": { 00:09:10.714 "state": "completed", 00:09:10.714 "digest": "sha256", 00:09:10.714 "dhgroup": "ffdhe4096" 00:09:10.714 } 00:09:10.714 } 00:09:10.714 ]' 00:09:10.714 19:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:09:10.714 19:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:10.714 19:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:10.714 19:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:09:10.714 19:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:10.714 19:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:10.714 19:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:10.714 19:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:10.971 19:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDY1ZGEwOGRhYjUzZjE5NzI1ZjE1YWM1OWNmMDQ4ZTFjMGUzOTRhZGRlNDU1NmUwwJmTAQ==: --dhchap-ctrl-secret DHHC-1:03:YzgzZWVhZmJmNDNjOTRlODNlNmZhNDZkZGNjZTg0NWRkZjc0NWU0OTRiYWM0NzUwZWIzZjA4MTFkMWY2YTIzOTavnEE=: 00:09:10.971 19:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:00:ZDY1ZGEwOGRhYjUzZjE5NzI1ZjE1YWM1OWNmMDQ4ZTFjMGUzOTRhZGRlNDU1NmUwwJmTAQ==: --dhchap-ctrl-secret DHHC-1:03:YzgzZWVhZmJmNDNjOTRlODNlNmZhNDZkZGNjZTg0NWRkZjc0NWU0OTRiYWM0NzUwZWIzZjA4MTFkMWY2YTIzOTavnEE=: 00:09:11.535 19:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:11.535 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:11.535 19:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:09:11.535 19:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.535 19:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:11.535 19:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.535 19:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:11.535 19:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:09:11.535 19:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:09:11.792 19:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:09:11.792 19:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:11.792 19:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:11.792 19:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:09:11.792 19:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:09:11.792 19:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:11.792 19:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:11.792 19:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.792 19:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:11.792 19:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.792 19:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:11.792 19:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:11.792 19:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:12.143 00:09:12.143 19:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:12.143 19:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:12.143 19:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:12.401 19:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:12.401 19:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:12.401 19:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.401 19:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:12.401 19:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.401 19:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:12.401 { 00:09:12.401 "cntlid": 27, 00:09:12.401 "qid": 0, 00:09:12.401 "state": "enabled", 00:09:12.401 "thread": "nvmf_tgt_poll_group_000", 00:09:12.401 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:09:12.401 "listen_address": { 00:09:12.401 "trtype": "TCP", 00:09:12.401 "adrfam": "IPv4", 00:09:12.401 "traddr": "10.0.0.3", 00:09:12.401 "trsvcid": "4420" 00:09:12.401 }, 00:09:12.401 "peer_address": { 00:09:12.401 "trtype": "TCP", 00:09:12.401 "adrfam": "IPv4", 00:09:12.401 "traddr": "10.0.0.1", 00:09:12.401 "trsvcid": "51060" 00:09:12.401 }, 00:09:12.401 "auth": { 00:09:12.401 "state": "completed", 
00:09:12.401 "digest": "sha256", 00:09:12.401 "dhgroup": "ffdhe4096" 00:09:12.401 } 00:09:12.401 } 00:09:12.401 ]' 00:09:12.401 19:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:12.401 19:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:12.401 19:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:12.401 19:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:09:12.401 19:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:12.401 19:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:12.401 19:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:12.401 19:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:12.658 19:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDU4N2QxNTAwOWNiOTA5MzQxZTYxOTA2YzRmYjMxM2Jh56XD: --dhchap-ctrl-secret DHHC-1:02:NmJlYjViMjQzYTBhYjVlMTQ1YWZjMTNhM2M3YmFjMTNlZGY3YTg4YjYwMGY1NjQzMW7RBQ==: 00:09:12.658 19:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:01:NDU4N2QxNTAwOWNiOTA5MzQxZTYxOTA2YzRmYjMxM2Jh56XD: --dhchap-ctrl-secret DHHC-1:02:NmJlYjViMjQzYTBhYjVlMTQ1YWZjMTNhM2M3YmFjMTNlZGY3YTg4YjYwMGY1NjQzMW7RBQ==: 00:09:13.223 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:13.223 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:13.224 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:09:13.224 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.224 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:13.224 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.224 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:13.224 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:09:13.224 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:09:13.480 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:09:13.480 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:13.480 19:43:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:13.480 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:09:13.480 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:09:13.480 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:13.481 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:13.481 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.481 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:13.481 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.481 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:13.481 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:13.481 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:13.739 00:09:13.739 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:13.739 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:13.739 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:13.996 19:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:13.996 19:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:13.996 19:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.996 19:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:13.996 19:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.996 19:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:13.996 { 00:09:13.996 "cntlid": 29, 00:09:13.996 "qid": 0, 00:09:13.996 "state": "enabled", 00:09:13.996 "thread": "nvmf_tgt_poll_group_000", 00:09:13.996 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:09:13.996 "listen_address": { 00:09:13.996 "trtype": "TCP", 00:09:13.996 "adrfam": "IPv4", 00:09:13.996 "traddr": "10.0.0.3", 00:09:13.996 "trsvcid": "4420" 00:09:13.996 }, 00:09:13.996 "peer_address": { 00:09:13.996 "trtype": "TCP", 00:09:13.996 "adrfam": 
"IPv4", 00:09:13.996 "traddr": "10.0.0.1", 00:09:13.996 "trsvcid": "35214" 00:09:13.996 }, 00:09:13.996 "auth": { 00:09:13.996 "state": "completed", 00:09:13.996 "digest": "sha256", 00:09:13.996 "dhgroup": "ffdhe4096" 00:09:13.996 } 00:09:13.996 } 00:09:13.996 ]' 00:09:13.996 19:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:13.996 19:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:13.996 19:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:13.996 19:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:09:13.996 19:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:13.996 19:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:13.996 19:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:13.996 19:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:14.252 19:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTViNGY5ZmRiYTg0YmNhNzdmNjY1Y2YxMzdlYjI5MWIzOTZlOGRmYWUxZjdhN2NjEiwkGw==: --dhchap-ctrl-secret DHHC-1:01:OGQ4OGMxZjUwODc2NWI5OTI5OGE4NDgxYmY5NWEwYjk6tdyP: 00:09:14.252 19:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:02:MTViNGY5ZmRiYTg0YmNhNzdmNjY1Y2YxMzdlYjI5MWIzOTZlOGRmYWUxZjdhN2NjEiwkGw==: --dhchap-ctrl-secret DHHC-1:01:OGQ4OGMxZjUwODc2NWI5OTI5OGE4NDgxYmY5NWEwYjk6tdyP: 00:09:14.816 19:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:14.816 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:14.816 19:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:09:14.816 19:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.816 19:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:14.816 19:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.816 19:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:14.816 19:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:09:14.816 19:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:09:15.073 19:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:09:15.073 19:43:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:15.073 19:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:15.073 19:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:09:15.073 19:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:09:15.073 19:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:15.073 19:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key3 00:09:15.073 19:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.073 19:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:15.073 19:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.073 19:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:09:15.073 19:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:15.073 19:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:15.330 00:09:15.330 19:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:15.330 19:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:15.330 19:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:15.587 19:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:15.587 19:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:15.587 19:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.587 19:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:15.587 19:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.587 19:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:15.587 { 00:09:15.587 "cntlid": 31, 00:09:15.587 "qid": 0, 00:09:15.587 "state": "enabled", 00:09:15.587 "thread": "nvmf_tgt_poll_group_000", 00:09:15.587 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:09:15.587 "listen_address": { 00:09:15.587 "trtype": "TCP", 00:09:15.587 "adrfam": "IPv4", 00:09:15.587 "traddr": "10.0.0.3", 00:09:15.587 "trsvcid": "4420" 00:09:15.587 }, 00:09:15.587 "peer_address": { 00:09:15.587 "trtype": "TCP", 
00:09:15.587 "adrfam": "IPv4", 00:09:15.587 "traddr": "10.0.0.1", 00:09:15.587 "trsvcid": "35228" 00:09:15.587 }, 00:09:15.587 "auth": { 00:09:15.587 "state": "completed", 00:09:15.587 "digest": "sha256", 00:09:15.587 "dhgroup": "ffdhe4096" 00:09:15.587 } 00:09:15.587 } 00:09:15.587 ]' 00:09:15.587 19:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:15.587 19:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:15.587 19:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:15.587 19:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:09:15.587 19:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:15.587 19:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:15.587 19:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:15.587 19:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:15.843 19:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjBhNTE4OThjOWM1MjBjNzA0MjE0MDU1NWU5YWE2MmQyOTZmNWYyMTcwOWZlZmUzMTM2YzdhZmY2ZWYwNDY3NMoRYKs=: 00:09:15.843 19:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:03:ZjBhNTE4OThjOWM1MjBjNzA0MjE0MDU1NWU5YWE2MmQyOTZmNWYyMTcwOWZlZmUzMTM2YzdhZmY2ZWYwNDY3NMoRYKs=: 00:09:16.407 19:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:16.407 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:16.407 19:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:09:16.407 19:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.407 19:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:16.407 19:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.407 19:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:09:16.407 19:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:16.407 19:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:09:16.407 19:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:09:16.664 19:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:09:16.664 
19:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:16.664 19:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:16.664 19:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:09:16.664 19:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:09:16.664 19:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:16.664 19:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:16.664 19:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.664 19:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:16.664 19:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.664 19:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:16.664 19:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:16.664 19:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:17.229 00:09:17.229 19:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:17.229 19:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:17.229 19:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:17.229 19:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:17.229 19:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:17.229 19:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.229 19:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:17.229 19:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.229 19:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:17.229 { 00:09:17.229 "cntlid": 33, 00:09:17.229 "qid": 0, 00:09:17.229 "state": "enabled", 00:09:17.229 "thread": "nvmf_tgt_poll_group_000", 00:09:17.229 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:09:17.229 "listen_address": { 00:09:17.229 "trtype": "TCP", 00:09:17.229 "adrfam": "IPv4", 00:09:17.229 "traddr": 
"10.0.0.3", 00:09:17.229 "trsvcid": "4420" 00:09:17.229 }, 00:09:17.229 "peer_address": { 00:09:17.229 "trtype": "TCP", 00:09:17.229 "adrfam": "IPv4", 00:09:17.229 "traddr": "10.0.0.1", 00:09:17.229 "trsvcid": "35252" 00:09:17.229 }, 00:09:17.229 "auth": { 00:09:17.229 "state": "completed", 00:09:17.229 "digest": "sha256", 00:09:17.229 "dhgroup": "ffdhe6144" 00:09:17.229 } 00:09:17.229 } 00:09:17.229 ]' 00:09:17.229 19:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:17.229 19:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:17.229 19:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:17.486 19:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:09:17.486 19:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:17.486 19:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:17.486 19:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:17.486 19:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:17.744 19:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDY1ZGEwOGRhYjUzZjE5NzI1ZjE1YWM1OWNmMDQ4ZTFjMGUzOTRhZGRlNDU1NmUwwJmTAQ==: --dhchap-ctrl-secret DHHC-1:03:YzgzZWVhZmJmNDNjOTRlODNlNmZhNDZkZGNjZTg0NWRkZjc0NWU0OTRiYWM0NzUwZWIzZjA4MTFkMWY2YTIzOTavnEE=: 00:09:17.744 19:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:00:ZDY1ZGEwOGRhYjUzZjE5NzI1ZjE1YWM1OWNmMDQ4ZTFjMGUzOTRhZGRlNDU1NmUwwJmTAQ==: --dhchap-ctrl-secret DHHC-1:03:YzgzZWVhZmJmNDNjOTRlODNlNmZhNDZkZGNjZTg0NWRkZjc0NWU0OTRiYWM0NzUwZWIzZjA4MTFkMWY2YTIzOTavnEE=: 00:09:18.307 19:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:18.307 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:18.307 19:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:09:18.307 19:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.307 19:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:18.307 19:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.307 19:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:18.307 19:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:09:18.307 19:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:09:18.564 19:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:09:18.564 19:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:18.564 19:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:18.564 19:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:09:18.564 19:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:09:18.564 19:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:18.564 19:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:18.564 19:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.564 19:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:18.564 19:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.564 19:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:18.564 19:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:18.564 19:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:18.821 00:09:18.821 19:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:18.821 19:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:18.821 19:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:19.079 19:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:19.079 19:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:19.079 19:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.079 19:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:19.079 19:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.079 19:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:19.079 { 00:09:19.079 "cntlid": 35, 00:09:19.079 "qid": 0, 00:09:19.079 "state": "enabled", 00:09:19.079 "thread": "nvmf_tgt_poll_group_000", 
00:09:19.079 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:09:19.079 "listen_address": { 00:09:19.079 "trtype": "TCP", 00:09:19.079 "adrfam": "IPv4", 00:09:19.079 "traddr": "10.0.0.3", 00:09:19.079 "trsvcid": "4420" 00:09:19.079 }, 00:09:19.079 "peer_address": { 00:09:19.079 "trtype": "TCP", 00:09:19.079 "adrfam": "IPv4", 00:09:19.079 "traddr": "10.0.0.1", 00:09:19.079 "trsvcid": "35290" 00:09:19.079 }, 00:09:19.079 "auth": { 00:09:19.079 "state": "completed", 00:09:19.079 "digest": "sha256", 00:09:19.079 "dhgroup": "ffdhe6144" 00:09:19.079 } 00:09:19.079 } 00:09:19.079 ]' 00:09:19.079 19:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:19.079 19:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:19.079 19:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:19.079 19:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:09:19.079 19:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:19.079 19:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:19.079 19:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:19.079 19:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:19.337 19:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDU4N2QxNTAwOWNiOTA5MzQxZTYxOTA2YzRmYjMxM2Jh56XD: --dhchap-ctrl-secret DHHC-1:02:NmJlYjViMjQzYTBhYjVlMTQ1YWZjMTNhM2M3YmFjMTNlZGY3YTg4YjYwMGY1NjQzMW7RBQ==: 00:09:19.337 19:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:01:NDU4N2QxNTAwOWNiOTA5MzQxZTYxOTA2YzRmYjMxM2Jh56XD: --dhchap-ctrl-secret DHHC-1:02:NmJlYjViMjQzYTBhYjVlMTQ1YWZjMTNhM2M3YmFjMTNlZGY3YTg4YjYwMGY1NjQzMW7RBQ==: 00:09:19.903 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:19.903 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:19.903 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:09:19.903 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.903 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:19.903 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.903 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:19.903 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:09:19.903 19:43:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:09:20.160 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:09:20.160 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:20.160 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:20.160 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:09:20.160 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:09:20.160 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:20.160 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:20.160 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.160 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:20.160 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.160 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:20.160 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:20.160 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:20.418 00:09:20.418 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:20.418 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:20.418 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:20.675 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:20.675 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:20.675 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.675 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:20.675 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.675 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:20.675 { 
00:09:20.675 "cntlid": 37, 00:09:20.675 "qid": 0, 00:09:20.675 "state": "enabled", 00:09:20.675 "thread": "nvmf_tgt_poll_group_000", 00:09:20.675 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:09:20.675 "listen_address": { 00:09:20.675 "trtype": "TCP", 00:09:20.675 "adrfam": "IPv4", 00:09:20.675 "traddr": "10.0.0.3", 00:09:20.675 "trsvcid": "4420" 00:09:20.675 }, 00:09:20.675 "peer_address": { 00:09:20.675 "trtype": "TCP", 00:09:20.675 "adrfam": "IPv4", 00:09:20.675 "traddr": "10.0.0.1", 00:09:20.675 "trsvcid": "35316" 00:09:20.675 }, 00:09:20.675 "auth": { 00:09:20.675 "state": "completed", 00:09:20.675 "digest": "sha256", 00:09:20.675 "dhgroup": "ffdhe6144" 00:09:20.675 } 00:09:20.675 } 00:09:20.675 ]' 00:09:20.675 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:20.675 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:20.675 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:20.675 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:09:20.675 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:20.675 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:20.675 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:20.675 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:20.932 19:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTViNGY5ZmRiYTg0YmNhNzdmNjY1Y2YxMzdlYjI5MWIzOTZlOGRmYWUxZjdhN2NjEiwkGw==: --dhchap-ctrl-secret DHHC-1:01:OGQ4OGMxZjUwODc2NWI5OTI5OGE4NDgxYmY5NWEwYjk6tdyP: 00:09:20.932 19:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:02:MTViNGY5ZmRiYTg0YmNhNzdmNjY1Y2YxMzdlYjI5MWIzOTZlOGRmYWUxZjdhN2NjEiwkGw==: --dhchap-ctrl-secret DHHC-1:01:OGQ4OGMxZjUwODc2NWI5OTI5OGE4NDgxYmY5NWEwYjk6tdyP: 00:09:21.496 19:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:21.496 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:21.496 19:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:09:21.496 19:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.496 19:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:21.496 19:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.496 19:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:21.496 19:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:09:21.496 19:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:09:21.753 19:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:09:21.753 19:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:21.753 19:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:21.753 19:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:09:21.753 19:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:09:21.753 19:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:21.753 19:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key3 00:09:21.753 19:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.753 19:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:21.753 19:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.753 19:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:09:21.753 19:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:21.753 19:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:22.010 00:09:22.010 19:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:22.010 19:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:22.010 19:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:22.268 19:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:22.268 19:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:22.268 19:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.268 19:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:22.268 19:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.268 19:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:09:22.268 { 00:09:22.268 "cntlid": 39, 00:09:22.268 "qid": 0, 00:09:22.268 "state": "enabled", 00:09:22.268 "thread": "nvmf_tgt_poll_group_000", 00:09:22.268 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:09:22.268 "listen_address": { 00:09:22.268 "trtype": "TCP", 00:09:22.268 "adrfam": "IPv4", 00:09:22.268 "traddr": "10.0.0.3", 00:09:22.268 "trsvcid": "4420" 00:09:22.268 }, 00:09:22.268 "peer_address": { 00:09:22.268 "trtype": "TCP", 00:09:22.268 "adrfam": "IPv4", 00:09:22.268 "traddr": "10.0.0.1", 00:09:22.268 "trsvcid": "35332" 00:09:22.268 }, 00:09:22.268 "auth": { 00:09:22.268 "state": "completed", 00:09:22.268 "digest": "sha256", 00:09:22.268 "dhgroup": "ffdhe6144" 00:09:22.268 } 00:09:22.268 } 00:09:22.268 ]' 00:09:22.268 19:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:22.268 19:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:22.268 19:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:22.268 19:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:09:22.268 19:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:22.268 19:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:22.268 19:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:22.268 19:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:22.526 19:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjBhNTE4OThjOWM1MjBjNzA0MjE0MDU1NWU5YWE2MmQyOTZmNWYyMTcwOWZlZmUzMTM2YzdhZmY2ZWYwNDY3NMoRYKs=: 00:09:22.526 19:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:03:ZjBhNTE4OThjOWM1MjBjNzA0MjE0MDU1NWU5YWE2MmQyOTZmNWYyMTcwOWZlZmUzMTM2YzdhZmY2ZWYwNDY3NMoRYKs=: 00:09:23.458 19:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:23.458 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:23.458 19:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:09:23.458 19:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.458 19:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:23.458 19:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.458 19:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:09:23.458 19:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:23.458 19:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:09:23.458 19:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:09:23.458 19:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:09:23.458 19:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:23.458 19:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:23.458 19:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:09:23.458 19:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:09:23.458 19:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:23.458 19:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:23.458 19:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.458 19:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:23.458 19:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.458 19:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:23.458 19:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:23.458 19:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:24.023 00:09:24.023 19:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:24.023 19:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:24.023 19:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:24.281 19:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:24.281 19:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:24.281 19:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.281 19:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:24.281 19:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:09:24.281 19:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:24.281 { 00:09:24.281 "cntlid": 41, 00:09:24.281 "qid": 0, 00:09:24.281 "state": "enabled", 00:09:24.281 "thread": "nvmf_tgt_poll_group_000", 00:09:24.281 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:09:24.281 "listen_address": { 00:09:24.281 "trtype": "TCP", 00:09:24.281 "adrfam": "IPv4", 00:09:24.281 "traddr": "10.0.0.3", 00:09:24.281 "trsvcid": "4420" 00:09:24.281 }, 00:09:24.281 "peer_address": { 00:09:24.281 "trtype": "TCP", 00:09:24.281 "adrfam": "IPv4", 00:09:24.281 "traddr": "10.0.0.1", 00:09:24.281 "trsvcid": "51282" 00:09:24.281 }, 00:09:24.281 "auth": { 00:09:24.281 "state": "completed", 00:09:24.281 "digest": "sha256", 00:09:24.281 "dhgroup": "ffdhe8192" 00:09:24.281 } 00:09:24.281 } 00:09:24.281 ]' 00:09:24.281 19:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:24.281 19:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:24.281 19:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:24.282 19:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:09:24.282 19:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:24.282 19:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:24.282 19:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:24.282 19:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:24.539 19:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDY1ZGEwOGRhYjUzZjE5NzI1ZjE1YWM1OWNmMDQ4ZTFjMGUzOTRhZGRlNDU1NmUwwJmTAQ==: --dhchap-ctrl-secret DHHC-1:03:YzgzZWVhZmJmNDNjOTRlODNlNmZhNDZkZGNjZTg0NWRkZjc0NWU0OTRiYWM0NzUwZWIzZjA4MTFkMWY2YTIzOTavnEE=: 00:09:24.539 19:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:00:ZDY1ZGEwOGRhYjUzZjE5NzI1ZjE1YWM1OWNmMDQ4ZTFjMGUzOTRhZGRlNDU1NmUwwJmTAQ==: --dhchap-ctrl-secret DHHC-1:03:YzgzZWVhZmJmNDNjOTRlODNlNmZhNDZkZGNjZTg0NWRkZjc0NWU0OTRiYWM0NzUwZWIzZjA4MTFkMWY2YTIzOTavnEE=: 00:09:25.104 19:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:25.104 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:25.104 19:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:09:25.104 19:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.104 19:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:25.104 19:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
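For readers following the xtrace above, the verification that repeats after every attach (target/auth.sh@73-77) reduces to one controller-name check on the host RPC socket and three jq checks on the target's qpair listing. A minimal sketch of that check, assuming the same sockets and NQNs the log already uses; rpc_cmd stands in for the test's target-side RPC wrapper, whose socket is not shown in this excerpt:

  # host-side RPC wrapper, as in target/auth.sh@31
  hostrpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }

  # 1) the host attached a controller under the expected name
  [[ "$(hostrpc bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]

  # 2) the target reports an authenticated qpair with the digest/dhgroup under test
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ "$(jq -r '.[0].auth.digest'  <<< "$qpairs")" == "sha256" ]]
  [[ "$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")" == "ffdhe8192" ]]
  [[ "$(jq -r '.[0].auth.state'   <<< "$qpairs")" == "completed" ]]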
00:09:25.104 19:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:25.104 19:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:09:25.104 19:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:09:25.361 19:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:09:25.361 19:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:25.361 19:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:25.361 19:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:09:25.361 19:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:09:25.361 19:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:25.361 19:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:25.361 19:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.361 19:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:25.361 19:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.361 19:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:25.361 19:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:25.361 19:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:25.927 00:09:25.927 19:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:25.927 19:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:25.927 19:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:25.927 19:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:25.927 19:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:25.927 19:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.927 19:43:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:25.927 19:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.927 19:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:25.927 { 00:09:25.927 "cntlid": 43, 00:09:25.927 "qid": 0, 00:09:25.927 "state": "enabled", 00:09:25.927 "thread": "nvmf_tgt_poll_group_000", 00:09:25.927 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:09:25.927 "listen_address": { 00:09:25.927 "trtype": "TCP", 00:09:25.927 "adrfam": "IPv4", 00:09:25.927 "traddr": "10.0.0.3", 00:09:25.927 "trsvcid": "4420" 00:09:25.927 }, 00:09:25.927 "peer_address": { 00:09:25.927 "trtype": "TCP", 00:09:25.927 "adrfam": "IPv4", 00:09:25.927 "traddr": "10.0.0.1", 00:09:25.927 "trsvcid": "51290" 00:09:25.927 }, 00:09:25.927 "auth": { 00:09:25.927 "state": "completed", 00:09:25.927 "digest": "sha256", 00:09:25.927 "dhgroup": "ffdhe8192" 00:09:25.927 } 00:09:25.927 } 00:09:25.927 ]' 00:09:25.927 19:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:26.185 19:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:26.185 19:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:26.185 19:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:09:26.185 19:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:26.185 19:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:26.185 19:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:26.185 19:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:26.512 19:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDU4N2QxNTAwOWNiOTA5MzQxZTYxOTA2YzRmYjMxM2Jh56XD: --dhchap-ctrl-secret DHHC-1:02:NmJlYjViMjQzYTBhYjVlMTQ1YWZjMTNhM2M3YmFjMTNlZGY3YTg4YjYwMGY1NjQzMW7RBQ==: 00:09:26.512 19:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:01:NDU4N2QxNTAwOWNiOTA5MzQxZTYxOTA2YzRmYjMxM2Jh56XD: --dhchap-ctrl-secret DHHC-1:02:NmJlYjViMjQzYTBhYjVlMTQ1YWZjMTNhM2M3YmFjMTNlZGY3YTg4YjYwMGY1NjQzMW7RBQ==: 00:09:27.079 19:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:27.079 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:27.079 19:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:09:27.079 19:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.079 19:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
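Each round in this log (target/auth.sh@120-123) exercises one digest/dhgroup/key combination end to end: reconfigure the host's allowed DH-HMAC-CHAP parameters, register the host NQN on the subsystem with that key pair, then attach a controller that must authenticate with it. Condensed into a sketch of a single iteration, using only flags and names visible above; hostrpc/rpc_cmd are the wrappers from the previous sketch, and the keys[]/ckeys[] arrays are set up earlier in target/auth.sh (assumed here):

  digest=sha256 dhgroup=ffdhe8192 keyid=1

  # host: restrict the initiator to the combination under test
  hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

  # target: allow the host NQN to authenticate with key$keyid (plus ckey$keyid when one is defined)
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
          nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 \
          --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

  # host: attach a bdev controller, forcing DH-HMAC-CHAP with the same key pair
  hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
          -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 \
          -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
          --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"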
00:09:27.079 19:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.079 19:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:27.079 19:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:09:27.079 19:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:09:27.337 19:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:09:27.337 19:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:27.337 19:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:27.337 19:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:09:27.337 19:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:09:27.337 19:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:27.337 19:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:27.337 19:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.337 19:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:27.337 19:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.337 19:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:27.337 19:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:27.337 19:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:27.900 00:09:27.900 19:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:27.900 19:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:27.900 19:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:28.159 19:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:28.159 19:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:28.159 19:43:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.159 19:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:28.159 19:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.159 19:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:28.159 { 00:09:28.159 "cntlid": 45, 00:09:28.159 "qid": 0, 00:09:28.159 "state": "enabled", 00:09:28.159 "thread": "nvmf_tgt_poll_group_000", 00:09:28.159 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:09:28.159 "listen_address": { 00:09:28.159 "trtype": "TCP", 00:09:28.159 "adrfam": "IPv4", 00:09:28.159 "traddr": "10.0.0.3", 00:09:28.159 "trsvcid": "4420" 00:09:28.159 }, 00:09:28.159 "peer_address": { 00:09:28.159 "trtype": "TCP", 00:09:28.159 "adrfam": "IPv4", 00:09:28.159 "traddr": "10.0.0.1", 00:09:28.159 "trsvcid": "51332" 00:09:28.159 }, 00:09:28.159 "auth": { 00:09:28.159 "state": "completed", 00:09:28.159 "digest": "sha256", 00:09:28.159 "dhgroup": "ffdhe8192" 00:09:28.159 } 00:09:28.159 } 00:09:28.159 ]' 00:09:28.159 19:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:28.159 19:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:28.159 19:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:28.159 19:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:09:28.159 19:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:28.159 19:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:28.159 19:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:28.159 19:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:28.416 19:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTViNGY5ZmRiYTg0YmNhNzdmNjY1Y2YxMzdlYjI5MWIzOTZlOGRmYWUxZjdhN2NjEiwkGw==: --dhchap-ctrl-secret DHHC-1:01:OGQ4OGMxZjUwODc2NWI5OTI5OGE4NDgxYmY5NWEwYjk6tdyP: 00:09:28.417 19:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:02:MTViNGY5ZmRiYTg0YmNhNzdmNjY1Y2YxMzdlYjI5MWIzOTZlOGRmYWUxZjdhN2NjEiwkGw==: --dhchap-ctrl-secret DHHC-1:01:OGQ4OGMxZjUwODc2NWI5OTI5OGE4NDgxYmY5NWEwYjk6tdyP: 00:09:28.982 19:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:28.983 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:28.983 19:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:09:28.983 19:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
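After the SPDK-host checks pass, each round repeats the handshake from the kernel initiator and then tears everything down (target/auth.sh@80-83). A sketch of that leg; $HOST_KEY and $CTRL_KEY are placeholders for the DHHC-1:xx:...: secrets that appear verbatim in the connect lines above:

  # kernel initiator: same subsystem and key pair, via nvme-cli
  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
       -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 \
       --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 \
       --dhchap-secret "$HOST_KEY" --dhchap-ctrl-secret "$CTRL_KEY"

  # tear down before the next digest/dhgroup/key combination
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
          nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2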
00:09:28.983 19:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:28.983 19:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.983 19:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:28.983 19:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:09:28.983 19:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:09:29.241 19:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:09:29.241 19:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:29.241 19:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:29.241 19:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:09:29.241 19:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:09:29.241 19:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:29.241 19:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key3 00:09:29.241 19:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.241 19:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:29.241 19:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.241 19:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:09:29.241 19:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:29.241 19:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:29.806 00:09:29.806 19:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:29.806 19:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:29.806 19:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:30.063 19:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:30.063 19:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:30.063 
19:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.063 19:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:30.063 19:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.063 19:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:30.063 { 00:09:30.063 "cntlid": 47, 00:09:30.063 "qid": 0, 00:09:30.063 "state": "enabled", 00:09:30.063 "thread": "nvmf_tgt_poll_group_000", 00:09:30.063 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:09:30.063 "listen_address": { 00:09:30.063 "trtype": "TCP", 00:09:30.063 "adrfam": "IPv4", 00:09:30.063 "traddr": "10.0.0.3", 00:09:30.063 "trsvcid": "4420" 00:09:30.063 }, 00:09:30.063 "peer_address": { 00:09:30.063 "trtype": "TCP", 00:09:30.063 "adrfam": "IPv4", 00:09:30.063 "traddr": "10.0.0.1", 00:09:30.063 "trsvcid": "51348" 00:09:30.063 }, 00:09:30.063 "auth": { 00:09:30.064 "state": "completed", 00:09:30.064 "digest": "sha256", 00:09:30.064 "dhgroup": "ffdhe8192" 00:09:30.064 } 00:09:30.064 } 00:09:30.064 ]' 00:09:30.064 19:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:30.064 19:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:30.064 19:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:30.064 19:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:09:30.064 19:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:30.064 19:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:30.064 19:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:30.064 19:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:30.321 19:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjBhNTE4OThjOWM1MjBjNzA0MjE0MDU1NWU5YWE2MmQyOTZmNWYyMTcwOWZlZmUzMTM2YzdhZmY2ZWYwNDY3NMoRYKs=: 00:09:30.321 19:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:03:ZjBhNTE4OThjOWM1MjBjNzA0MjE0MDU1NWU5YWE2MmQyOTZmNWYyMTcwOWZlZmUzMTM2YzdhZmY2ZWYwNDY3NMoRYKs=: 00:09:30.887 19:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:30.887 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:30.887 19:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:09:30.887 19:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.887 19:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
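For reference, each connect_authenticate pass traced above boils down to the following host-side sequence. This is a condensed sketch rather than part of the captured output: it reuses only the RPCs, flags and paths visible in the trace, assumes the key0/ckey0-style keyring entries were loaded earlier in the test, and uses the digest, dhgroup and key index as placeholders for whatever combination the loop is currently on (target-side calls are shown against the target's default RPC socket, host-side calls against -s /var/tmp/host.sock as above).

  #!/usr/bin/env bash
  # Condensed sketch of one host-side DH-HMAC-CHAP pass from target/auth.sh above.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2
  digest=${1:-sha384}     # combination under test, e.g. sha384 / null / key 0 as in the trace
  dhgroup=${2:-null}
  keyid=${3:-0}

  # Restrict the SPDK initiator to the digest/dhgroup combination under test.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

  # Allow the host on the subsystem with the key (and controller key) for this index.
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

  # Attach a controller over TCP; authentication runs during this connect.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.3 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
      --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

  # The attached controller shows up on the host side ...
  "$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0

  # ... and the target reports one fully authenticated qpair.
  "$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'    # expect: completed
  "$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.digest'   # expect: the digest under test
  "$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.dhgroup'  # expect: the dhgroup under test

  # Detach again before the kernel-initiator half of the check.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

The remainder of the log repeats exactly this sequence for each digest, dhgroup and key index combination.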
00:09:30.887 19:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.887 19:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:09:30.887 19:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:09:30.887 19:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:30.887 19:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:09:30.887 19:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:09:31.143 19:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:09:31.143 19:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:31.143 19:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:09:31.143 19:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:31.143 19:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:09:31.143 19:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:31.143 19:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:31.143 19:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.143 19:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:31.143 19:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.143 19:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:31.143 19:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:31.143 19:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:31.401 00:09:31.401 19:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:31.401 19:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:31.401 19:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:31.659 19:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:31.659 19:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:31.659 19:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.659 19:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:31.659 19:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.659 19:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:31.659 { 00:09:31.659 "cntlid": 49, 00:09:31.659 "qid": 0, 00:09:31.659 "state": "enabled", 00:09:31.659 "thread": "nvmf_tgt_poll_group_000", 00:09:31.659 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:09:31.659 "listen_address": { 00:09:31.659 "trtype": "TCP", 00:09:31.659 "adrfam": "IPv4", 00:09:31.659 "traddr": "10.0.0.3", 00:09:31.659 "trsvcid": "4420" 00:09:31.659 }, 00:09:31.659 "peer_address": { 00:09:31.659 "trtype": "TCP", 00:09:31.659 "adrfam": "IPv4", 00:09:31.659 "traddr": "10.0.0.1", 00:09:31.659 "trsvcid": "51384" 00:09:31.659 }, 00:09:31.659 "auth": { 00:09:31.659 "state": "completed", 00:09:31.659 "digest": "sha384", 00:09:31.659 "dhgroup": "null" 00:09:31.659 } 00:09:31.659 } 00:09:31.659 ]' 00:09:31.659 19:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:31.659 19:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:09:31.659 19:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:31.659 19:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:31.659 19:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:31.659 19:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:31.659 19:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:31.659 19:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:31.915 19:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDY1ZGEwOGRhYjUzZjE5NzI1ZjE1YWM1OWNmMDQ4ZTFjMGUzOTRhZGRlNDU1NmUwwJmTAQ==: --dhchap-ctrl-secret DHHC-1:03:YzgzZWVhZmJmNDNjOTRlODNlNmZhNDZkZGNjZTg0NWRkZjc0NWU0OTRiYWM0NzUwZWIzZjA4MTFkMWY2YTIzOTavnEE=: 00:09:31.915 19:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:00:ZDY1ZGEwOGRhYjUzZjE5NzI1ZjE1YWM1OWNmMDQ4ZTFjMGUzOTRhZGRlNDU1NmUwwJmTAQ==: --dhchap-ctrl-secret DHHC-1:03:YzgzZWVhZmJmNDNjOTRlODNlNmZhNDZkZGNjZTg0NWRkZjc0NWU0OTRiYWM0NzUwZWIzZjA4MTFkMWY2YTIzOTavnEE=: 00:09:32.480 19:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:32.480 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:32.480 19:43:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:09:32.480 19:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.480 19:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:32.480 19:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.480 19:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:32.480 19:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:09:32.480 19:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:09:32.738 19:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:09:32.738 19:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:32.738 19:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:09:32.738 19:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:32.738 19:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:09:32.738 19:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:32.738 19:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:32.738 19:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.738 19:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:32.738 19:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.738 19:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:32.738 19:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:32.738 19:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:32.996 00:09:32.996 19:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:32.996 19:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:32.996 19:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:33.253 19:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:33.253 19:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:33.253 19:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.253 19:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:33.253 19:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.253 19:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:33.253 { 00:09:33.253 "cntlid": 51, 00:09:33.253 "qid": 0, 00:09:33.253 "state": "enabled", 00:09:33.253 "thread": "nvmf_tgt_poll_group_000", 00:09:33.253 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:09:33.253 "listen_address": { 00:09:33.253 "trtype": "TCP", 00:09:33.253 "adrfam": "IPv4", 00:09:33.253 "traddr": "10.0.0.3", 00:09:33.253 "trsvcid": "4420" 00:09:33.253 }, 00:09:33.253 "peer_address": { 00:09:33.253 "trtype": "TCP", 00:09:33.253 "adrfam": "IPv4", 00:09:33.253 "traddr": "10.0.0.1", 00:09:33.253 "trsvcid": "35818" 00:09:33.253 }, 00:09:33.253 "auth": { 00:09:33.253 "state": "completed", 00:09:33.253 "digest": "sha384", 00:09:33.253 "dhgroup": "null" 00:09:33.253 } 00:09:33.253 } 00:09:33.253 ]' 00:09:33.253 19:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:33.253 19:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:09:33.253 19:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:33.253 19:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:33.253 19:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:33.253 19:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:33.253 19:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:33.253 19:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:33.510 19:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDU4N2QxNTAwOWNiOTA5MzQxZTYxOTA2YzRmYjMxM2Jh56XD: --dhchap-ctrl-secret DHHC-1:02:NmJlYjViMjQzYTBhYjVlMTQ1YWZjMTNhM2M3YmFjMTNlZGY3YTg4YjYwMGY1NjQzMW7RBQ==: 00:09:33.510 19:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:01:NDU4N2QxNTAwOWNiOTA5MzQxZTYxOTA2YzRmYjMxM2Jh56XD: --dhchap-ctrl-secret DHHC-1:02:NmJlYjViMjQzYTBhYjVlMTQ1YWZjMTNhM2M3YmFjMTNlZGY3YTg4YjYwMGY1NjQzMW7RBQ==: 00:09:34.081 19:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:34.081 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:34.081 19:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:09:34.081 19:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.081 19:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:34.081 19:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.081 19:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:34.081 19:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:09:34.081 19:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:09:34.339 19:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:09:34.339 19:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:34.339 19:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:09:34.339 19:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:34.339 19:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:09:34.339 19:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:34.339 19:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:34.339 19:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.339 19:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:34.339 19:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.339 19:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:34.339 19:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:34.339 19:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:34.597 00:09:34.597 19:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:34.597 19:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:09:34.597 19:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:34.855 19:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:34.855 19:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:34.855 19:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.855 19:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:34.855 19:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.855 19:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:34.855 { 00:09:34.855 "cntlid": 53, 00:09:34.855 "qid": 0, 00:09:34.855 "state": "enabled", 00:09:34.855 "thread": "nvmf_tgt_poll_group_000", 00:09:34.855 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:09:34.855 "listen_address": { 00:09:34.855 "trtype": "TCP", 00:09:34.855 "adrfam": "IPv4", 00:09:34.855 "traddr": "10.0.0.3", 00:09:34.855 "trsvcid": "4420" 00:09:34.855 }, 00:09:34.855 "peer_address": { 00:09:34.855 "trtype": "TCP", 00:09:34.855 "adrfam": "IPv4", 00:09:34.855 "traddr": "10.0.0.1", 00:09:34.855 "trsvcid": "35842" 00:09:34.855 }, 00:09:34.855 "auth": { 00:09:34.855 "state": "completed", 00:09:34.855 "digest": "sha384", 00:09:34.855 "dhgroup": "null" 00:09:34.855 } 00:09:34.855 } 00:09:34.855 ]' 00:09:34.855 19:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:34.855 19:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:09:34.855 19:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:34.855 19:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:34.855 19:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:35.112 19:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:35.112 19:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:35.112 19:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:35.112 19:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTViNGY5ZmRiYTg0YmNhNzdmNjY1Y2YxMzdlYjI5MWIzOTZlOGRmYWUxZjdhN2NjEiwkGw==: --dhchap-ctrl-secret DHHC-1:01:OGQ4OGMxZjUwODc2NWI5OTI5OGE4NDgxYmY5NWEwYjk6tdyP: 00:09:35.112 19:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:02:MTViNGY5ZmRiYTg0YmNhNzdmNjY1Y2YxMzdlYjI5MWIzOTZlOGRmYWUxZjdhN2NjEiwkGw==: --dhchap-ctrl-secret DHHC-1:01:OGQ4OGMxZjUwODc2NWI5OTI5OGE4NDgxYmY5NWEwYjk6tdyP: 00:09:36.046 19:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:36.046 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:36.046 19:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:09:36.046 19:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.046 19:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:36.046 19:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.046 19:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:36.046 19:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:09:36.046 19:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:09:36.046 19:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:09:36.046 19:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:36.046 19:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:09:36.046 19:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:36.046 19:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:09:36.046 19:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:36.046 19:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key3 00:09:36.046 19:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.046 19:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:36.046 19:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.046 19:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:09:36.046 19:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:36.046 19:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:36.303 00:09:36.303 19:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:36.303 19:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
00:09:36.303 19:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:36.561 19:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:36.561 19:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:36.561 19:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.561 19:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:36.561 19:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.561 19:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:36.561 { 00:09:36.561 "cntlid": 55, 00:09:36.561 "qid": 0, 00:09:36.561 "state": "enabled", 00:09:36.561 "thread": "nvmf_tgt_poll_group_000", 00:09:36.561 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:09:36.561 "listen_address": { 00:09:36.561 "trtype": "TCP", 00:09:36.561 "adrfam": "IPv4", 00:09:36.561 "traddr": "10.0.0.3", 00:09:36.561 "trsvcid": "4420" 00:09:36.561 }, 00:09:36.561 "peer_address": { 00:09:36.561 "trtype": "TCP", 00:09:36.561 "adrfam": "IPv4", 00:09:36.561 "traddr": "10.0.0.1", 00:09:36.561 "trsvcid": "35862" 00:09:36.561 }, 00:09:36.561 "auth": { 00:09:36.561 "state": "completed", 00:09:36.561 "digest": "sha384", 00:09:36.561 "dhgroup": "null" 00:09:36.561 } 00:09:36.561 } 00:09:36.561 ]' 00:09:36.561 19:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:36.561 19:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:09:36.561 19:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:36.561 19:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:36.561 19:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:36.819 19:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:36.819 19:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:36.819 19:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:36.819 19:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjBhNTE4OThjOWM1MjBjNzA0MjE0MDU1NWU5YWE2MmQyOTZmNWYyMTcwOWZlZmUzMTM2YzdhZmY2ZWYwNDY3NMoRYKs=: 00:09:36.819 19:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:03:ZjBhNTE4OThjOWM1MjBjNzA0MjE0MDU1NWU5YWE2MmQyOTZmNWYyMTcwOWZlZmUzMTM2YzdhZmY2ZWYwNDY3NMoRYKs=: 00:09:37.399 19:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:37.399 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
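Each pass then re-verifies the same combination with the kernel initiator, which is what the nvme connect / nvme disconnect pairs above correspond to. A minimal sketch of that half follows, again limited to the flags seen in the trace; the DHHC-1 secret strings are left as placeholders here because they are the literal key material generated for this run:

  #!/usr/bin/env bash
  # Kernel (nvme-cli) half of each pass: connect in-band with DH-HMAC-CHAP
  # secrets, disconnect, then drop the host from the subsystem again.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostid=91838eb1-5852-43eb-90b2-09876f360ab2
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:$hostid
  key='DHHC-1:...'    # host secret for the key index under test (literal values appear in the trace)
  ckey='DHHC-1:...'   # matching controller secret, when the test provides one

  nvme connect -t tcp -a 10.0.0.3 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" -l 0 \
      --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"

  nvme disconnect -n "$subnqn"

  "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"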
00:09:37.399 19:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:09:37.399 19:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.399 19:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:37.656 19:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.656 19:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:09:37.656 19:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:37.656 19:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:09:37.656 19:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:09:37.656 19:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:09:37.656 19:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:37.656 19:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:09:37.656 19:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:09:37.656 19:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:09:37.656 19:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:37.656 19:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:37.656 19:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.656 19:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:37.656 19:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.656 19:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:37.656 19:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:37.656 19:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:37.913 00:09:37.913 19:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:37.913 
19:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:37.913 19:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:38.171 19:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:38.171 19:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:38.171 19:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.171 19:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:38.171 19:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.171 19:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:38.171 { 00:09:38.171 "cntlid": 57, 00:09:38.171 "qid": 0, 00:09:38.171 "state": "enabled", 00:09:38.171 "thread": "nvmf_tgt_poll_group_000", 00:09:38.171 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:09:38.171 "listen_address": { 00:09:38.171 "trtype": "TCP", 00:09:38.171 "adrfam": "IPv4", 00:09:38.171 "traddr": "10.0.0.3", 00:09:38.171 "trsvcid": "4420" 00:09:38.171 }, 00:09:38.171 "peer_address": { 00:09:38.171 "trtype": "TCP", 00:09:38.171 "adrfam": "IPv4", 00:09:38.171 "traddr": "10.0.0.1", 00:09:38.171 "trsvcid": "35884" 00:09:38.171 }, 00:09:38.171 "auth": { 00:09:38.171 "state": "completed", 00:09:38.171 "digest": "sha384", 00:09:38.171 "dhgroup": "ffdhe2048" 00:09:38.171 } 00:09:38.171 } 00:09:38.171 ]' 00:09:38.171 19:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:38.171 19:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:09:38.171 19:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:38.430 19:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:38.430 19:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:38.430 19:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:38.430 19:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:38.430 19:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:38.688 19:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDY1ZGEwOGRhYjUzZjE5NzI1ZjE1YWM1OWNmMDQ4ZTFjMGUzOTRhZGRlNDU1NmUwwJmTAQ==: --dhchap-ctrl-secret DHHC-1:03:YzgzZWVhZmJmNDNjOTRlODNlNmZhNDZkZGNjZTg0NWRkZjc0NWU0OTRiYWM0NzUwZWIzZjA4MTFkMWY2YTIzOTavnEE=: 00:09:38.688 19:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:00:ZDY1ZGEwOGRhYjUzZjE5NzI1ZjE1YWM1OWNmMDQ4ZTFjMGUzOTRhZGRlNDU1NmUwwJmTAQ==: 
--dhchap-ctrl-secret DHHC-1:03:YzgzZWVhZmJmNDNjOTRlODNlNmZhNDZkZGNjZTg0NWRkZjc0NWU0OTRiYWM0NzUwZWIzZjA4MTFkMWY2YTIzOTavnEE=: 00:09:39.254 19:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:39.254 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:39.254 19:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:09:39.254 19:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.254 19:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:39.254 19:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.254 19:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:39.254 19:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:09:39.254 19:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:09:39.511 19:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:09:39.511 19:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:39.512 19:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:09:39.512 19:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:09:39.512 19:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:09:39.512 19:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:39.512 19:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:39.512 19:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.512 19:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:39.512 19:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.512 19:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:39.512 19:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:39.512 19:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:39.769 00:09:39.769 19:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:39.769 19:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:39.769 19:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:40.026 19:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:40.026 19:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:40.026 19:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.026 19:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:40.026 19:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.026 19:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:40.026 { 00:09:40.026 "cntlid": 59, 00:09:40.026 "qid": 0, 00:09:40.026 "state": "enabled", 00:09:40.026 "thread": "nvmf_tgt_poll_group_000", 00:09:40.026 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:09:40.026 "listen_address": { 00:09:40.026 "trtype": "TCP", 00:09:40.026 "adrfam": "IPv4", 00:09:40.026 "traddr": "10.0.0.3", 00:09:40.026 "trsvcid": "4420" 00:09:40.026 }, 00:09:40.026 "peer_address": { 00:09:40.026 "trtype": "TCP", 00:09:40.026 "adrfam": "IPv4", 00:09:40.026 "traddr": "10.0.0.1", 00:09:40.026 "trsvcid": "35922" 00:09:40.026 }, 00:09:40.026 "auth": { 00:09:40.026 "state": "completed", 00:09:40.026 "digest": "sha384", 00:09:40.026 "dhgroup": "ffdhe2048" 00:09:40.026 } 00:09:40.026 } 00:09:40.026 ]' 00:09:40.026 19:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:40.026 19:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:09:40.026 19:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:40.026 19:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:40.026 19:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:40.026 19:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:40.026 19:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:40.026 19:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:40.284 19:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDU4N2QxNTAwOWNiOTA5MzQxZTYxOTA2YzRmYjMxM2Jh56XD: --dhchap-ctrl-secret DHHC-1:02:NmJlYjViMjQzYTBhYjVlMTQ1YWZjMTNhM2M3YmFjMTNlZGY3YTg4YjYwMGY1NjQzMW7RBQ==: 00:09:40.284 19:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:01:NDU4N2QxNTAwOWNiOTA5MzQxZTYxOTA2YzRmYjMxM2Jh56XD: --dhchap-ctrl-secret DHHC-1:02:NmJlYjViMjQzYTBhYjVlMTQ1YWZjMTNhM2M3YmFjMTNlZGY3YTg4YjYwMGY1NjQzMW7RBQ==: 00:09:40.892 19:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:40.892 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:40.892 19:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:09:40.892 19:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.892 19:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:40.892 19:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.892 19:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:40.892 19:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:09:40.892 19:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:09:40.892 19:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:09:40.892 19:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:40.892 19:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:09:40.892 19:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:09:40.892 19:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:09:40.892 19:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:40.892 19:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:40.892 19:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.892 19:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:40.892 19:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.892 19:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:40.892 19:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:40.892 19:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:41.150 00:09:41.150 19:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:41.150 19:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:41.150 19:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:41.408 19:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:41.408 19:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:41.408 19:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.408 19:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:41.408 19:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.408 19:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:41.408 { 00:09:41.408 "cntlid": 61, 00:09:41.408 "qid": 0, 00:09:41.408 "state": "enabled", 00:09:41.408 "thread": "nvmf_tgt_poll_group_000", 00:09:41.408 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:09:41.408 "listen_address": { 00:09:41.408 "trtype": "TCP", 00:09:41.408 "adrfam": "IPv4", 00:09:41.408 "traddr": "10.0.0.3", 00:09:41.408 "trsvcid": "4420" 00:09:41.408 }, 00:09:41.408 "peer_address": { 00:09:41.408 "trtype": "TCP", 00:09:41.408 "adrfam": "IPv4", 00:09:41.408 "traddr": "10.0.0.1", 00:09:41.408 "trsvcid": "35936" 00:09:41.408 }, 00:09:41.408 "auth": { 00:09:41.408 "state": "completed", 00:09:41.408 "digest": "sha384", 00:09:41.408 "dhgroup": "ffdhe2048" 00:09:41.408 } 00:09:41.408 } 00:09:41.408 ]' 00:09:41.408 19:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:41.408 19:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:09:41.408 19:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:41.408 19:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:41.408 19:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:41.666 19:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:41.666 19:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:41.666 19:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:41.666 19:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTViNGY5ZmRiYTg0YmNhNzdmNjY1Y2YxMzdlYjI5MWIzOTZlOGRmYWUxZjdhN2NjEiwkGw==: --dhchap-ctrl-secret DHHC-1:01:OGQ4OGMxZjUwODc2NWI5OTI5OGE4NDgxYmY5NWEwYjk6tdyP: 00:09:41.666 19:43:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:02:MTViNGY5ZmRiYTg0YmNhNzdmNjY1Y2YxMzdlYjI5MWIzOTZlOGRmYWUxZjdhN2NjEiwkGw==: --dhchap-ctrl-secret DHHC-1:01:OGQ4OGMxZjUwODc2NWI5OTI5OGE4NDgxYmY5NWEwYjk6tdyP: 00:09:42.599 19:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:42.599 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:42.599 19:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:09:42.599 19:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.599 19:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:42.599 19:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.599 19:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:42.599 19:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:09:42.599 19:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:09:42.599 19:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:09:42.599 19:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:42.599 19:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:09:42.599 19:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:09:42.599 19:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:09:42.599 19:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:42.599 19:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key3 00:09:42.599 19:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.599 19:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:42.599 19:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.599 19:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:09:42.599 19:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:42.599 19:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:42.857 00:09:42.857 19:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:42.857 19:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:42.857 19:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:43.114 19:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:43.114 19:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:43.114 19:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.114 19:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:43.114 19:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.114 19:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:43.114 { 00:09:43.114 "cntlid": 63, 00:09:43.114 "qid": 0, 00:09:43.114 "state": "enabled", 00:09:43.114 "thread": "nvmf_tgt_poll_group_000", 00:09:43.114 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:09:43.114 "listen_address": { 00:09:43.114 "trtype": "TCP", 00:09:43.114 "adrfam": "IPv4", 00:09:43.114 "traddr": "10.0.0.3", 00:09:43.114 "trsvcid": "4420" 00:09:43.114 }, 00:09:43.114 "peer_address": { 00:09:43.114 "trtype": "TCP", 00:09:43.114 "adrfam": "IPv4", 00:09:43.114 "traddr": "10.0.0.1", 00:09:43.114 "trsvcid": "52458" 00:09:43.114 }, 00:09:43.114 "auth": { 00:09:43.114 "state": "completed", 00:09:43.114 "digest": "sha384", 00:09:43.114 "dhgroup": "ffdhe2048" 00:09:43.114 } 00:09:43.114 } 00:09:43.114 ]' 00:09:43.114 19:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:43.114 19:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:09:43.114 19:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:43.114 19:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:43.114 19:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:43.114 19:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:43.114 19:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:43.114 19:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:43.373 19:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjBhNTE4OThjOWM1MjBjNzA0MjE0MDU1NWU5YWE2MmQyOTZmNWYyMTcwOWZlZmUzMTM2YzdhZmY2ZWYwNDY3NMoRYKs=: 00:09:43.374 19:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:03:ZjBhNTE4OThjOWM1MjBjNzA0MjE0MDU1NWU5YWE2MmQyOTZmNWYyMTcwOWZlZmUzMTM2YzdhZmY2ZWYwNDY3NMoRYKs=: 00:09:43.947 19:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:43.947 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:43.947 19:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:09:43.947 19:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.947 19:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:43.947 19:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.947 19:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:09:43.947 19:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:43.947 19:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:09:43.947 19:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:09:44.204 19:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:09:44.204 19:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:44.204 19:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:09:44.204 19:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:09:44.204 19:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:09:44.204 19:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:44.204 19:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:44.204 19:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.204 19:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:44.204 19:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.204 19:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:44.204 19:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:09:44.204 19:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:44.461 00:09:44.461 19:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:44.461 19:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:44.461 19:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:44.719 19:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:44.719 19:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:44.719 19:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.719 19:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:44.719 19:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.719 19:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:44.719 { 00:09:44.719 "cntlid": 65, 00:09:44.719 "qid": 0, 00:09:44.719 "state": "enabled", 00:09:44.719 "thread": "nvmf_tgt_poll_group_000", 00:09:44.719 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:09:44.719 "listen_address": { 00:09:44.719 "trtype": "TCP", 00:09:44.719 "adrfam": "IPv4", 00:09:44.719 "traddr": "10.0.0.3", 00:09:44.719 "trsvcid": "4420" 00:09:44.719 }, 00:09:44.719 "peer_address": { 00:09:44.719 "trtype": "TCP", 00:09:44.719 "adrfam": "IPv4", 00:09:44.719 "traddr": "10.0.0.1", 00:09:44.719 "trsvcid": "52474" 00:09:44.719 }, 00:09:44.719 "auth": { 00:09:44.719 "state": "completed", 00:09:44.719 "digest": "sha384", 00:09:44.719 "dhgroup": "ffdhe3072" 00:09:44.719 } 00:09:44.719 } 00:09:44.719 ]' 00:09:44.719 19:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:44.719 19:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:09:44.719 19:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:44.719 19:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:09:44.719 19:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:44.719 19:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:44.719 19:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:44.719 19:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:44.976 19:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:ZDY1ZGEwOGRhYjUzZjE5NzI1ZjE1YWM1OWNmMDQ4ZTFjMGUzOTRhZGRlNDU1NmUwwJmTAQ==: --dhchap-ctrl-secret DHHC-1:03:YzgzZWVhZmJmNDNjOTRlODNlNmZhNDZkZGNjZTg0NWRkZjc0NWU0OTRiYWM0NzUwZWIzZjA4MTFkMWY2YTIzOTavnEE=: 00:09:44.976 19:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:00:ZDY1ZGEwOGRhYjUzZjE5NzI1ZjE1YWM1OWNmMDQ4ZTFjMGUzOTRhZGRlNDU1NmUwwJmTAQ==: --dhchap-ctrl-secret DHHC-1:03:YzgzZWVhZmJmNDNjOTRlODNlNmZhNDZkZGNjZTg0NWRkZjc0NWU0OTRiYWM0NzUwZWIzZjA4MTFkMWY2YTIzOTavnEE=: 00:09:45.542 19:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:45.542 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:45.542 19:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:09:45.542 19:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.542 19:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:45.542 19:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.542 19:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:45.542 19:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:09:45.542 19:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:09:45.800 19:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:09:45.800 19:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:45.800 19:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:09:45.800 19:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:09:45.800 19:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:09:45.800 19:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:45.800 19:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:45.800 19:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.800 19:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:45.800 19:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.801 19:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:45.801 19:43:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:45.801 19:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:46.058 00:09:46.058 19:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:46.058 19:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:46.058 19:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:46.315 19:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:46.315 19:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:46.315 19:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.315 19:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:46.315 19:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.315 19:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:46.315 { 00:09:46.315 "cntlid": 67, 00:09:46.315 "qid": 0, 00:09:46.315 "state": "enabled", 00:09:46.315 "thread": "nvmf_tgt_poll_group_000", 00:09:46.315 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:09:46.315 "listen_address": { 00:09:46.315 "trtype": "TCP", 00:09:46.315 "adrfam": "IPv4", 00:09:46.315 "traddr": "10.0.0.3", 00:09:46.315 "trsvcid": "4420" 00:09:46.315 }, 00:09:46.315 "peer_address": { 00:09:46.315 "trtype": "TCP", 00:09:46.315 "adrfam": "IPv4", 00:09:46.315 "traddr": "10.0.0.1", 00:09:46.315 "trsvcid": "52484" 00:09:46.315 }, 00:09:46.315 "auth": { 00:09:46.315 "state": "completed", 00:09:46.316 "digest": "sha384", 00:09:46.316 "dhgroup": "ffdhe3072" 00:09:46.316 } 00:09:46.316 } 00:09:46.316 ]' 00:09:46.316 19:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:46.316 19:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:09:46.316 19:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:46.316 19:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:09:46.316 19:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:46.316 19:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:46.316 19:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:46.316 19:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:46.573 19:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDU4N2QxNTAwOWNiOTA5MzQxZTYxOTA2YzRmYjMxM2Jh56XD: --dhchap-ctrl-secret DHHC-1:02:NmJlYjViMjQzYTBhYjVlMTQ1YWZjMTNhM2M3YmFjMTNlZGY3YTg4YjYwMGY1NjQzMW7RBQ==: 00:09:46.573 19:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:01:NDU4N2QxNTAwOWNiOTA5MzQxZTYxOTA2YzRmYjMxM2Jh56XD: --dhchap-ctrl-secret DHHC-1:02:NmJlYjViMjQzYTBhYjVlMTQ1YWZjMTNhM2M3YmFjMTNlZGY3YTg4YjYwMGY1NjQzMW7RBQ==: 00:09:47.139 19:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:47.139 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:47.139 19:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:09:47.139 19:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.139 19:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:47.139 19:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.139 19:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:47.139 19:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:09:47.139 19:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:09:47.397 19:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:09:47.397 19:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:47.397 19:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:09:47.397 19:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:09:47.397 19:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:09:47.397 19:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:47.397 19:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:47.397 19:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.397 19:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:47.397 19:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.397 19:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:47.397 19:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:47.397 19:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:47.655 00:09:47.656 19:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:47.656 19:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:47.656 19:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:47.914 19:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:47.914 19:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:47.914 19:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.914 19:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:47.914 19:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.914 19:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:47.914 { 00:09:47.914 "cntlid": 69, 00:09:47.914 "qid": 0, 00:09:47.914 "state": "enabled", 00:09:47.914 "thread": "nvmf_tgt_poll_group_000", 00:09:47.914 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:09:47.914 "listen_address": { 00:09:47.914 "trtype": "TCP", 00:09:47.914 "adrfam": "IPv4", 00:09:47.914 "traddr": "10.0.0.3", 00:09:47.914 "trsvcid": "4420" 00:09:47.914 }, 00:09:47.914 "peer_address": { 00:09:47.914 "trtype": "TCP", 00:09:47.914 "adrfam": "IPv4", 00:09:47.914 "traddr": "10.0.0.1", 00:09:47.914 "trsvcid": "52528" 00:09:47.914 }, 00:09:47.914 "auth": { 00:09:47.914 "state": "completed", 00:09:47.914 "digest": "sha384", 00:09:47.914 "dhgroup": "ffdhe3072" 00:09:47.914 } 00:09:47.914 } 00:09:47.914 ]' 00:09:47.914 19:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:47.914 19:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:09:47.914 19:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:47.914 19:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:09:47.914 19:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:47.914 19:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:47.914 19:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:09:47.914 19:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:48.172 19:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTViNGY5ZmRiYTg0YmNhNzdmNjY1Y2YxMzdlYjI5MWIzOTZlOGRmYWUxZjdhN2NjEiwkGw==: --dhchap-ctrl-secret DHHC-1:01:OGQ4OGMxZjUwODc2NWI5OTI5OGE4NDgxYmY5NWEwYjk6tdyP: 00:09:48.172 19:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:02:MTViNGY5ZmRiYTg0YmNhNzdmNjY1Y2YxMzdlYjI5MWIzOTZlOGRmYWUxZjdhN2NjEiwkGw==: --dhchap-ctrl-secret DHHC-1:01:OGQ4OGMxZjUwODc2NWI5OTI5OGE4NDgxYmY5NWEwYjk6tdyP: 00:09:48.738 19:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:48.738 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:48.738 19:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:09:48.738 19:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.738 19:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:48.738 19:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.738 19:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:48.738 19:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:09:48.738 19:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:09:48.996 19:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:09:48.996 19:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:48.996 19:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:09:48.996 19:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:09:48.996 19:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:09:48.996 19:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:48.996 19:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key3 00:09:48.996 19:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.996 19:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:48.996 19:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.996 19:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:09:48.996 19:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:48.996 19:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:49.254 00:09:49.254 19:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:49.254 19:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:49.254 19:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:49.545 19:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:49.545 19:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:49.545 19:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.545 19:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:49.545 19:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.545 19:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:49.545 { 00:09:49.545 "cntlid": 71, 00:09:49.545 "qid": 0, 00:09:49.545 "state": "enabled", 00:09:49.545 "thread": "nvmf_tgt_poll_group_000", 00:09:49.545 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:09:49.545 "listen_address": { 00:09:49.545 "trtype": "TCP", 00:09:49.545 "adrfam": "IPv4", 00:09:49.545 "traddr": "10.0.0.3", 00:09:49.545 "trsvcid": "4420" 00:09:49.545 }, 00:09:49.545 "peer_address": { 00:09:49.545 "trtype": "TCP", 00:09:49.545 "adrfam": "IPv4", 00:09:49.545 "traddr": "10.0.0.1", 00:09:49.545 "trsvcid": "52552" 00:09:49.545 }, 00:09:49.545 "auth": { 00:09:49.545 "state": "completed", 00:09:49.545 "digest": "sha384", 00:09:49.545 "dhgroup": "ffdhe3072" 00:09:49.545 } 00:09:49.545 } 00:09:49.545 ]' 00:09:49.545 19:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:49.545 19:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:09:49.545 19:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:49.545 19:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:09:49.545 19:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:49.545 19:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:49.545 19:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:49.545 19:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:49.846 19:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjBhNTE4OThjOWM1MjBjNzA0MjE0MDU1NWU5YWE2MmQyOTZmNWYyMTcwOWZlZmUzMTM2YzdhZmY2ZWYwNDY3NMoRYKs=: 00:09:49.847 19:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:03:ZjBhNTE4OThjOWM1MjBjNzA0MjE0MDU1NWU5YWE2MmQyOTZmNWYyMTcwOWZlZmUzMTM2YzdhZmY2ZWYwNDY3NMoRYKs=: 00:09:50.412 19:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:50.412 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:50.412 19:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:09:50.412 19:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.412 19:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:50.412 19:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.412 19:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:09:50.412 19:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:50.412 19:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:09:50.412 19:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:09:50.670 19:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:09:50.670 19:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:50.670 19:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:09:50.670 19:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:09:50.670 19:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:09:50.670 19:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:50.670 19:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:50.670 19:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.670 19:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:50.670 19:43:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.670 19:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:50.670 19:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:50.670 19:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:50.927 00:09:50.927 19:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:50.927 19:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:50.927 19:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:51.185 19:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:51.185 19:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:51.185 19:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.185 19:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:51.185 19:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.185 19:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:51.185 { 00:09:51.185 "cntlid": 73, 00:09:51.185 "qid": 0, 00:09:51.185 "state": "enabled", 00:09:51.185 "thread": "nvmf_tgt_poll_group_000", 00:09:51.185 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:09:51.185 "listen_address": { 00:09:51.185 "trtype": "TCP", 00:09:51.185 "adrfam": "IPv4", 00:09:51.185 "traddr": "10.0.0.3", 00:09:51.185 "trsvcid": "4420" 00:09:51.185 }, 00:09:51.185 "peer_address": { 00:09:51.185 "trtype": "TCP", 00:09:51.185 "adrfam": "IPv4", 00:09:51.185 "traddr": "10.0.0.1", 00:09:51.185 "trsvcid": "52584" 00:09:51.185 }, 00:09:51.185 "auth": { 00:09:51.185 "state": "completed", 00:09:51.185 "digest": "sha384", 00:09:51.185 "dhgroup": "ffdhe4096" 00:09:51.185 } 00:09:51.185 } 00:09:51.185 ]' 00:09:51.185 19:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:51.185 19:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:09:51.185 19:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:51.185 19:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:09:51.185 19:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:51.185 19:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:51.185 19:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:51.185 19:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:51.444 19:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDY1ZGEwOGRhYjUzZjE5NzI1ZjE1YWM1OWNmMDQ4ZTFjMGUzOTRhZGRlNDU1NmUwwJmTAQ==: --dhchap-ctrl-secret DHHC-1:03:YzgzZWVhZmJmNDNjOTRlODNlNmZhNDZkZGNjZTg0NWRkZjc0NWU0OTRiYWM0NzUwZWIzZjA4MTFkMWY2YTIzOTavnEE=: 00:09:51.444 19:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:00:ZDY1ZGEwOGRhYjUzZjE5NzI1ZjE1YWM1OWNmMDQ4ZTFjMGUzOTRhZGRlNDU1NmUwwJmTAQ==: --dhchap-ctrl-secret DHHC-1:03:YzgzZWVhZmJmNDNjOTRlODNlNmZhNDZkZGNjZTg0NWRkZjc0NWU0OTRiYWM0NzUwZWIzZjA4MTFkMWY2YTIzOTavnEE=: 00:09:52.008 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:52.008 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:52.008 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:09:52.008 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.008 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:52.008 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.008 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:52.008 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:09:52.008 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:09:52.266 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:09:52.266 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:52.266 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:09:52.266 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:09:52.266 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:09:52.266 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:52.266 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:52.266 19:43:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.266 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:52.266 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.266 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:52.266 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:52.266 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:52.546 00:09:52.546 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:52.546 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:52.546 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:52.820 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:52.820 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:52.820 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.820 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:52.820 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.820 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:52.820 { 00:09:52.820 "cntlid": 75, 00:09:52.820 "qid": 0, 00:09:52.820 "state": "enabled", 00:09:52.820 "thread": "nvmf_tgt_poll_group_000", 00:09:52.820 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:09:52.820 "listen_address": { 00:09:52.820 "trtype": "TCP", 00:09:52.820 "adrfam": "IPv4", 00:09:52.820 "traddr": "10.0.0.3", 00:09:52.820 "trsvcid": "4420" 00:09:52.820 }, 00:09:52.820 "peer_address": { 00:09:52.820 "trtype": "TCP", 00:09:52.820 "adrfam": "IPv4", 00:09:52.820 "traddr": "10.0.0.1", 00:09:52.820 "trsvcid": "54936" 00:09:52.820 }, 00:09:52.820 "auth": { 00:09:52.820 "state": "completed", 00:09:52.820 "digest": "sha384", 00:09:52.820 "dhgroup": "ffdhe4096" 00:09:52.820 } 00:09:52.820 } 00:09:52.820 ]' 00:09:52.820 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:52.820 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:09:52.820 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:52.820 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:09:52.820 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:52.820 19:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:52.820 19:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:52.820 19:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:53.077 19:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDU4N2QxNTAwOWNiOTA5MzQxZTYxOTA2YzRmYjMxM2Jh56XD: --dhchap-ctrl-secret DHHC-1:02:NmJlYjViMjQzYTBhYjVlMTQ1YWZjMTNhM2M3YmFjMTNlZGY3YTg4YjYwMGY1NjQzMW7RBQ==: 00:09:53.078 19:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:01:NDU4N2QxNTAwOWNiOTA5MzQxZTYxOTA2YzRmYjMxM2Jh56XD: --dhchap-ctrl-secret DHHC-1:02:NmJlYjViMjQzYTBhYjVlMTQ1YWZjMTNhM2M3YmFjMTNlZGY3YTg4YjYwMGY1NjQzMW7RBQ==: 00:09:53.643 19:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:53.643 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:53.643 19:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:09:53.643 19:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.643 19:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:53.643 19:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.643 19:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:53.643 19:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:09:53.643 19:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:09:53.901 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:09:53.901 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:53.901 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:09:53.901 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:09:53.901 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:09:53.901 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:53.901 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:53.901 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.901 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:53.901 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.901 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:53.901 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:53.901 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:54.158 00:09:54.158 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:54.158 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:54.158 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:54.414 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:54.414 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:54.415 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.415 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:54.415 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.415 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:54.415 { 00:09:54.415 "cntlid": 77, 00:09:54.415 "qid": 0, 00:09:54.415 "state": "enabled", 00:09:54.415 "thread": "nvmf_tgt_poll_group_000", 00:09:54.415 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:09:54.415 "listen_address": { 00:09:54.415 "trtype": "TCP", 00:09:54.415 "adrfam": "IPv4", 00:09:54.415 "traddr": "10.0.0.3", 00:09:54.415 "trsvcid": "4420" 00:09:54.415 }, 00:09:54.415 "peer_address": { 00:09:54.415 "trtype": "TCP", 00:09:54.415 "adrfam": "IPv4", 00:09:54.415 "traddr": "10.0.0.1", 00:09:54.415 "trsvcid": "54960" 00:09:54.415 }, 00:09:54.415 "auth": { 00:09:54.415 "state": "completed", 00:09:54.415 "digest": "sha384", 00:09:54.415 "dhgroup": "ffdhe4096" 00:09:54.415 } 00:09:54.415 } 00:09:54.415 ]' 00:09:54.415 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:54.415 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:09:54.415 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:09:54.415 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:09:54.415 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:54.415 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:54.415 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:54.415 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:54.673 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTViNGY5ZmRiYTg0YmNhNzdmNjY1Y2YxMzdlYjI5MWIzOTZlOGRmYWUxZjdhN2NjEiwkGw==: --dhchap-ctrl-secret DHHC-1:01:OGQ4OGMxZjUwODc2NWI5OTI5OGE4NDgxYmY5NWEwYjk6tdyP: 00:09:54.673 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:02:MTViNGY5ZmRiYTg0YmNhNzdmNjY1Y2YxMzdlYjI5MWIzOTZlOGRmYWUxZjdhN2NjEiwkGw==: --dhchap-ctrl-secret DHHC-1:01:OGQ4OGMxZjUwODc2NWI5OTI5OGE4NDgxYmY5NWEwYjk6tdyP: 00:09:55.238 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:55.238 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:55.238 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:09:55.238 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.238 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:55.238 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.238 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:55.238 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:09:55.238 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:09:55.540 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:09:55.540 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:55.540 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:09:55.540 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:09:55.540 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:09:55.540 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:55.540 19:43:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key3 00:09:55.540 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.540 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:55.540 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.540 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:09:55.540 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:55.540 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:55.801 00:09:55.801 19:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:55.801 19:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:55.801 19:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:56.060 19:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:56.060 19:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:56.060 19:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.060 19:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:56.060 19:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.060 19:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:56.060 { 00:09:56.060 "cntlid": 79, 00:09:56.060 "qid": 0, 00:09:56.060 "state": "enabled", 00:09:56.060 "thread": "nvmf_tgt_poll_group_000", 00:09:56.060 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:09:56.060 "listen_address": { 00:09:56.060 "trtype": "TCP", 00:09:56.060 "adrfam": "IPv4", 00:09:56.060 "traddr": "10.0.0.3", 00:09:56.060 "trsvcid": "4420" 00:09:56.060 }, 00:09:56.060 "peer_address": { 00:09:56.060 "trtype": "TCP", 00:09:56.060 "adrfam": "IPv4", 00:09:56.060 "traddr": "10.0.0.1", 00:09:56.060 "trsvcid": "54990" 00:09:56.060 }, 00:09:56.060 "auth": { 00:09:56.060 "state": "completed", 00:09:56.060 "digest": "sha384", 00:09:56.060 "dhgroup": "ffdhe4096" 00:09:56.060 } 00:09:56.060 } 00:09:56.060 ]' 00:09:56.060 19:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:56.060 19:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:09:56.060 19:43:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:56.060 19:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:09:56.318 19:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:56.318 19:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:56.318 19:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:56.318 19:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:56.576 19:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjBhNTE4OThjOWM1MjBjNzA0MjE0MDU1NWU5YWE2MmQyOTZmNWYyMTcwOWZlZmUzMTM2YzdhZmY2ZWYwNDY3NMoRYKs=: 00:09:56.576 19:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:03:ZjBhNTE4OThjOWM1MjBjNzA0MjE0MDU1NWU5YWE2MmQyOTZmNWYyMTcwOWZlZmUzMTM2YzdhZmY2ZWYwNDY3NMoRYKs=: 00:09:57.142 19:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:57.142 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:57.142 19:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:09:57.142 19:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.142 19:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:57.142 19:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.142 19:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:09:57.142 19:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:57.142 19:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:09:57.142 19:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:09:57.142 19:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:09:57.142 19:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:57.142 19:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:09:57.142 19:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:09:57.142 19:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:09:57.142 19:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:57.142 19:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:57.142 19:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.142 19:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:57.142 19:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.142 19:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:57.142 19:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:57.142 19:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:57.705 00:09:57.705 19:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:57.705 19:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:57.705 19:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:57.964 19:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:57.964 19:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:57.964 19:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.964 19:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:57.964 19:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.964 19:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:57.964 { 00:09:57.964 "cntlid": 81, 00:09:57.964 "qid": 0, 00:09:57.964 "state": "enabled", 00:09:57.964 "thread": "nvmf_tgt_poll_group_000", 00:09:57.964 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:09:57.964 "listen_address": { 00:09:57.964 "trtype": "TCP", 00:09:57.964 "adrfam": "IPv4", 00:09:57.964 "traddr": "10.0.0.3", 00:09:57.964 "trsvcid": "4420" 00:09:57.964 }, 00:09:57.964 "peer_address": { 00:09:57.964 "trtype": "TCP", 00:09:57.964 "adrfam": "IPv4", 00:09:57.964 "traddr": "10.0.0.1", 00:09:57.964 "trsvcid": "55036" 00:09:57.964 }, 00:09:57.964 "auth": { 00:09:57.964 "state": "completed", 00:09:57.964 "digest": "sha384", 00:09:57.964 "dhgroup": "ffdhe6144" 00:09:57.964 } 00:09:57.964 } 00:09:57.964 ]' 00:09:57.964 19:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
00:09:57.964 19:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:09:57.964 19:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:57.964 19:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:09:57.964 19:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:57.964 19:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:57.964 19:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:57.964 19:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:58.222 19:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDY1ZGEwOGRhYjUzZjE5NzI1ZjE1YWM1OWNmMDQ4ZTFjMGUzOTRhZGRlNDU1NmUwwJmTAQ==: --dhchap-ctrl-secret DHHC-1:03:YzgzZWVhZmJmNDNjOTRlODNlNmZhNDZkZGNjZTg0NWRkZjc0NWU0OTRiYWM0NzUwZWIzZjA4MTFkMWY2YTIzOTavnEE=: 00:09:58.222 19:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:00:ZDY1ZGEwOGRhYjUzZjE5NzI1ZjE1YWM1OWNmMDQ4ZTFjMGUzOTRhZGRlNDU1NmUwwJmTAQ==: --dhchap-ctrl-secret DHHC-1:03:YzgzZWVhZmJmNDNjOTRlODNlNmZhNDZkZGNjZTg0NWRkZjc0NWU0OTRiYWM0NzUwZWIzZjA4MTFkMWY2YTIzOTavnEE=: 00:09:58.858 19:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:58.858 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:58.858 19:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:09:58.858 19:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.858 19:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:58.858 19:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.858 19:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:58.858 19:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:09:58.858 19:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:09:59.115 19:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:09:59.115 19:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:59.115 19:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:09:59.115 19:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:09:59.115 19:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:09:59.115 19:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:59.115 19:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:59.115 19:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.115 19:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:59.115 19:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.115 19:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:59.116 19:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:59.116 19:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:59.373 00:09:59.373 19:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:59.373 19:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:59.373 19:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:59.630 19:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:59.630 19:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:59.630 19:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.631 19:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:59.631 19:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.631 19:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:59.631 { 00:09:59.631 "cntlid": 83, 00:09:59.631 "qid": 0, 00:09:59.631 "state": "enabled", 00:09:59.631 "thread": "nvmf_tgt_poll_group_000", 00:09:59.631 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:09:59.631 "listen_address": { 00:09:59.631 "trtype": "TCP", 00:09:59.631 "adrfam": "IPv4", 00:09:59.631 "traddr": "10.0.0.3", 00:09:59.631 "trsvcid": "4420" 00:09:59.631 }, 00:09:59.631 "peer_address": { 00:09:59.631 "trtype": "TCP", 00:09:59.631 "adrfam": "IPv4", 00:09:59.631 "traddr": "10.0.0.1", 00:09:59.631 "trsvcid": "55044" 00:09:59.631 }, 00:09:59.631 "auth": { 00:09:59.631 "state": "completed", 00:09:59.631 "digest": "sha384", 
00:09:59.631 "dhgroup": "ffdhe6144" 00:09:59.631 } 00:09:59.631 } 00:09:59.631 ]' 00:09:59.631 19:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:59.631 19:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:09:59.631 19:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:59.631 19:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:09:59.631 19:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:59.889 19:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:59.889 19:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:59.889 19:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:59.889 19:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDU4N2QxNTAwOWNiOTA5MzQxZTYxOTA2YzRmYjMxM2Jh56XD: --dhchap-ctrl-secret DHHC-1:02:NmJlYjViMjQzYTBhYjVlMTQ1YWZjMTNhM2M3YmFjMTNlZGY3YTg4YjYwMGY1NjQzMW7RBQ==: 00:09:59.889 19:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:01:NDU4N2QxNTAwOWNiOTA5MzQxZTYxOTA2YzRmYjMxM2Jh56XD: --dhchap-ctrl-secret DHHC-1:02:NmJlYjViMjQzYTBhYjVlMTQ1YWZjMTNhM2M3YmFjMTNlZGY3YTg4YjYwMGY1NjQzMW7RBQ==: 00:10:00.454 19:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:00.454 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:00.454 19:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:10:00.454 19:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.454 19:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:00.454 19:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.454 19:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:00.454 19:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:10:00.455 19:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:10:00.712 19:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:10:00.712 19:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:00.712 19:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 00:10:00.712 19:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:00.712 19:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:00.712 19:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:00.712 19:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:00.712 19:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.712 19:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:00.712 19:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.712 19:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:00.712 19:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:00.712 19:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:01.277 00:10:01.277 19:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:01.277 19:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:01.277 19:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:01.277 19:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:01.277 19:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:01.277 19:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.277 19:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:01.277 19:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.277 19:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:01.277 { 00:10:01.277 "cntlid": 85, 00:10:01.277 "qid": 0, 00:10:01.277 "state": "enabled", 00:10:01.277 "thread": "nvmf_tgt_poll_group_000", 00:10:01.277 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:10:01.277 "listen_address": { 00:10:01.277 "trtype": "TCP", 00:10:01.277 "adrfam": "IPv4", 00:10:01.277 "traddr": "10.0.0.3", 00:10:01.277 "trsvcid": "4420" 00:10:01.277 }, 00:10:01.277 "peer_address": { 00:10:01.277 "trtype": "TCP", 00:10:01.277 "adrfam": "IPv4", 00:10:01.277 "traddr": "10.0.0.1", 00:10:01.277 "trsvcid": "55080" 
00:10:01.277 }, 00:10:01.277 "auth": { 00:10:01.277 "state": "completed", 00:10:01.277 "digest": "sha384", 00:10:01.277 "dhgroup": "ffdhe6144" 00:10:01.277 } 00:10:01.277 } 00:10:01.277 ]' 00:10:01.277 19:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:01.534 19:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:01.534 19:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:01.534 19:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:01.534 19:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:01.534 19:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:01.534 19:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:01.534 19:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:01.791 19:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTViNGY5ZmRiYTg0YmNhNzdmNjY1Y2YxMzdlYjI5MWIzOTZlOGRmYWUxZjdhN2NjEiwkGw==: --dhchap-ctrl-secret DHHC-1:01:OGQ4OGMxZjUwODc2NWI5OTI5OGE4NDgxYmY5NWEwYjk6tdyP: 00:10:01.791 19:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:02:MTViNGY5ZmRiYTg0YmNhNzdmNjY1Y2YxMzdlYjI5MWIzOTZlOGRmYWUxZjdhN2NjEiwkGw==: --dhchap-ctrl-secret DHHC-1:01:OGQ4OGMxZjUwODc2NWI5OTI5OGE4NDgxYmY5NWEwYjk6tdyP: 00:10:02.355 19:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:02.355 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:02.355 19:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:10:02.355 19:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.355 19:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:02.355 19:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.355 19:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:02.355 19:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:10:02.355 19:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:10:02.355 19:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:10:02.355 19:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:10:02.355 19:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:02.355 19:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:02.355 19:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:02.355 19:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:02.355 19:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key3 00:10:02.355 19:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.355 19:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:02.355 19:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.355 19:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:02.355 19:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:02.355 19:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:02.921 00:10:02.921 19:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:02.921 19:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:02.921 19:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:03.179 19:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:03.179 19:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:03.179 19:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.179 19:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:03.179 19:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.179 19:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:03.179 { 00:10:03.179 "cntlid": 87, 00:10:03.179 "qid": 0, 00:10:03.179 "state": "enabled", 00:10:03.179 "thread": "nvmf_tgt_poll_group_000", 00:10:03.179 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:10:03.179 "listen_address": { 00:10:03.179 "trtype": "TCP", 00:10:03.179 "adrfam": "IPv4", 00:10:03.179 "traddr": "10.0.0.3", 00:10:03.179 "trsvcid": "4420" 00:10:03.179 }, 00:10:03.179 "peer_address": { 00:10:03.179 "trtype": "TCP", 00:10:03.179 "adrfam": "IPv4", 00:10:03.179 "traddr": "10.0.0.1", 00:10:03.179 "trsvcid": 
"53018" 00:10:03.179 }, 00:10:03.179 "auth": { 00:10:03.179 "state": "completed", 00:10:03.179 "digest": "sha384", 00:10:03.179 "dhgroup": "ffdhe6144" 00:10:03.179 } 00:10:03.179 } 00:10:03.179 ]' 00:10:03.179 19:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:03.179 19:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:03.179 19:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:03.179 19:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:03.179 19:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:03.179 19:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:03.179 19:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:03.179 19:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:03.437 19:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjBhNTE4OThjOWM1MjBjNzA0MjE0MDU1NWU5YWE2MmQyOTZmNWYyMTcwOWZlZmUzMTM2YzdhZmY2ZWYwNDY3NMoRYKs=: 00:10:03.437 19:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:03:ZjBhNTE4OThjOWM1MjBjNzA0MjE0MDU1NWU5YWE2MmQyOTZmNWYyMTcwOWZlZmUzMTM2YzdhZmY2ZWYwNDY3NMoRYKs=: 00:10:04.004 19:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:04.004 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:04.004 19:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:10:04.004 19:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.004 19:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:04.004 19:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.004 19:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:04.004 19:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:04.004 19:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:10:04.004 19:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:10:04.004 19:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:10:04.004 19:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:10:04.005 19:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:04.005 19:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:04.005 19:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:04.005 19:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:04.005 19:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:04.005 19:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.005 19:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:04.005 19:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.005 19:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:04.005 19:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:04.005 19:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:04.573 00:10:04.573 19:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:04.573 19:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:04.573 19:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:04.833 19:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:04.833 19:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:04.833 19:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.833 19:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:04.833 19:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.833 19:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:04.833 { 00:10:04.833 "cntlid": 89, 00:10:04.833 "qid": 0, 00:10:04.833 "state": "enabled", 00:10:04.833 "thread": "nvmf_tgt_poll_group_000", 00:10:04.833 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:10:04.833 "listen_address": { 00:10:04.833 "trtype": "TCP", 00:10:04.833 "adrfam": "IPv4", 00:10:04.833 "traddr": "10.0.0.3", 00:10:04.833 "trsvcid": "4420" 00:10:04.833 }, 00:10:04.833 "peer_address": { 00:10:04.833 
"trtype": "TCP", 00:10:04.833 "adrfam": "IPv4", 00:10:04.833 "traddr": "10.0.0.1", 00:10:04.833 "trsvcid": "53046" 00:10:04.833 }, 00:10:04.833 "auth": { 00:10:04.833 "state": "completed", 00:10:04.833 "digest": "sha384", 00:10:04.833 "dhgroup": "ffdhe8192" 00:10:04.833 } 00:10:04.833 } 00:10:04.833 ]' 00:10:04.833 19:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:04.833 19:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:04.833 19:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:04.833 19:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:04.833 19:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:04.833 19:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:04.833 19:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:04.833 19:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:05.092 19:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDY1ZGEwOGRhYjUzZjE5NzI1ZjE1YWM1OWNmMDQ4ZTFjMGUzOTRhZGRlNDU1NmUwwJmTAQ==: --dhchap-ctrl-secret DHHC-1:03:YzgzZWVhZmJmNDNjOTRlODNlNmZhNDZkZGNjZTg0NWRkZjc0NWU0OTRiYWM0NzUwZWIzZjA4MTFkMWY2YTIzOTavnEE=: 00:10:05.092 19:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:00:ZDY1ZGEwOGRhYjUzZjE5NzI1ZjE1YWM1OWNmMDQ4ZTFjMGUzOTRhZGRlNDU1NmUwwJmTAQ==: --dhchap-ctrl-secret DHHC-1:03:YzgzZWVhZmJmNDNjOTRlODNlNmZhNDZkZGNjZTg0NWRkZjc0NWU0OTRiYWM0NzUwZWIzZjA4MTFkMWY2YTIzOTavnEE=: 00:10:05.707 19:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:05.707 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:05.707 19:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:10:05.707 19:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.707 19:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:05.707 19:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.707 19:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:05.707 19:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:10:05.707 19:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:10:05.965 19:44:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:10:05.965 19:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:05.965 19:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:05.965 19:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:05.965 19:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:05.965 19:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:05.965 19:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:05.965 19:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.965 19:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:05.965 19:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.965 19:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:05.965 19:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:05.966 19:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:06.529 00:10:06.529 19:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:06.529 19:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:06.529 19:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:06.786 19:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:06.786 19:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:06.786 19:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.786 19:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:06.786 19:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.786 19:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:06.786 { 00:10:06.786 "cntlid": 91, 00:10:06.786 "qid": 0, 00:10:06.786 "state": "enabled", 00:10:06.786 "thread": "nvmf_tgt_poll_group_000", 00:10:06.786 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 
00:10:06.786 "listen_address": { 00:10:06.786 "trtype": "TCP", 00:10:06.786 "adrfam": "IPv4", 00:10:06.786 "traddr": "10.0.0.3", 00:10:06.786 "trsvcid": "4420" 00:10:06.786 }, 00:10:06.786 "peer_address": { 00:10:06.786 "trtype": "TCP", 00:10:06.786 "adrfam": "IPv4", 00:10:06.786 "traddr": "10.0.0.1", 00:10:06.786 "trsvcid": "53058" 00:10:06.786 }, 00:10:06.786 "auth": { 00:10:06.786 "state": "completed", 00:10:06.786 "digest": "sha384", 00:10:06.786 "dhgroup": "ffdhe8192" 00:10:06.786 } 00:10:06.786 } 00:10:06.786 ]' 00:10:06.786 19:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:06.786 19:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:06.786 19:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:06.786 19:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:06.786 19:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:06.786 19:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:06.786 19:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:06.786 19:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:07.042 19:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDU4N2QxNTAwOWNiOTA5MzQxZTYxOTA2YzRmYjMxM2Jh56XD: --dhchap-ctrl-secret DHHC-1:02:NmJlYjViMjQzYTBhYjVlMTQ1YWZjMTNhM2M3YmFjMTNlZGY3YTg4YjYwMGY1NjQzMW7RBQ==: 00:10:07.042 19:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:01:NDU4N2QxNTAwOWNiOTA5MzQxZTYxOTA2YzRmYjMxM2Jh56XD: --dhchap-ctrl-secret DHHC-1:02:NmJlYjViMjQzYTBhYjVlMTQ1YWZjMTNhM2M3YmFjMTNlZGY3YTg4YjYwMGY1NjQzMW7RBQ==: 00:10:07.656 19:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:07.656 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:07.656 19:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:10:07.656 19:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.656 19:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:07.656 19:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.656 19:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:07.656 19:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:10:07.656 19:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:10:07.915 19:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:10:07.915 19:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:07.915 19:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:07.915 19:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:07.915 19:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:07.915 19:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:07.915 19:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:07.915 19:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.915 19:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:07.915 19:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.915 19:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:07.915 19:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:07.915 19:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:08.479 00:10:08.479 19:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:08.479 19:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:08.479 19:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:08.479 19:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:08.479 19:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:08.479 19:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.479 19:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:08.479 19:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.479 19:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:08.479 { 00:10:08.479 "cntlid": 93, 00:10:08.479 "qid": 0, 00:10:08.479 "state": "enabled", 00:10:08.479 "thread": 
"nvmf_tgt_poll_group_000", 00:10:08.479 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:10:08.479 "listen_address": { 00:10:08.479 "trtype": "TCP", 00:10:08.479 "adrfam": "IPv4", 00:10:08.479 "traddr": "10.0.0.3", 00:10:08.479 "trsvcid": "4420" 00:10:08.479 }, 00:10:08.479 "peer_address": { 00:10:08.479 "trtype": "TCP", 00:10:08.479 "adrfam": "IPv4", 00:10:08.479 "traddr": "10.0.0.1", 00:10:08.479 "trsvcid": "53088" 00:10:08.479 }, 00:10:08.479 "auth": { 00:10:08.479 "state": "completed", 00:10:08.479 "digest": "sha384", 00:10:08.479 "dhgroup": "ffdhe8192" 00:10:08.479 } 00:10:08.479 } 00:10:08.479 ]' 00:10:08.479 19:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:08.479 19:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:08.479 19:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:08.736 19:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:08.736 19:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:08.736 19:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:08.736 19:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:08.736 19:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:08.993 19:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTViNGY5ZmRiYTg0YmNhNzdmNjY1Y2YxMzdlYjI5MWIzOTZlOGRmYWUxZjdhN2NjEiwkGw==: --dhchap-ctrl-secret DHHC-1:01:OGQ4OGMxZjUwODc2NWI5OTI5OGE4NDgxYmY5NWEwYjk6tdyP: 00:10:08.993 19:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:02:MTViNGY5ZmRiYTg0YmNhNzdmNjY1Y2YxMzdlYjI5MWIzOTZlOGRmYWUxZjdhN2NjEiwkGw==: --dhchap-ctrl-secret DHHC-1:01:OGQ4OGMxZjUwODc2NWI5OTI5OGE4NDgxYmY5NWEwYjk6tdyP: 00:10:09.557 19:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:09.557 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:09.557 19:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:10:09.557 19:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.557 19:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:09.557 19:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.557 19:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:09.557 19:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:10:09.557 19:44:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:10:09.557 19:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:10:09.557 19:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:09.557 19:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:09.557 19:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:09.557 19:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:09.557 19:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:09.557 19:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key3 00:10:09.557 19:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.557 19:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:09.814 19:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.815 19:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:09.815 19:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:09.815 19:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:10.072 00:10:10.329 19:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:10.329 19:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:10.329 19:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:10.329 19:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:10.329 19:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:10.329 19:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.329 19:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:10.329 19:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.329 19:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:10.329 { 00:10:10.329 "cntlid": 95, 00:10:10.329 "qid": 0, 00:10:10.329 "state": "enabled", 00:10:10.329 
"thread": "nvmf_tgt_poll_group_000", 00:10:10.329 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:10:10.329 "listen_address": { 00:10:10.329 "trtype": "TCP", 00:10:10.329 "adrfam": "IPv4", 00:10:10.329 "traddr": "10.0.0.3", 00:10:10.329 "trsvcid": "4420" 00:10:10.329 }, 00:10:10.329 "peer_address": { 00:10:10.329 "trtype": "TCP", 00:10:10.329 "adrfam": "IPv4", 00:10:10.329 "traddr": "10.0.0.1", 00:10:10.329 "trsvcid": "53112" 00:10:10.329 }, 00:10:10.329 "auth": { 00:10:10.329 "state": "completed", 00:10:10.329 "digest": "sha384", 00:10:10.329 "dhgroup": "ffdhe8192" 00:10:10.329 } 00:10:10.329 } 00:10:10.329 ]' 00:10:10.329 19:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:10.586 19:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:10.586 19:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:10.586 19:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:10.586 19:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:10.586 19:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:10.586 19:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:10.586 19:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:10.843 19:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjBhNTE4OThjOWM1MjBjNzA0MjE0MDU1NWU5YWE2MmQyOTZmNWYyMTcwOWZlZmUzMTM2YzdhZmY2ZWYwNDY3NMoRYKs=: 00:10:10.843 19:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:03:ZjBhNTE4OThjOWM1MjBjNzA0MjE0MDU1NWU5YWE2MmQyOTZmNWYyMTcwOWZlZmUzMTM2YzdhZmY2ZWYwNDY3NMoRYKs=: 00:10:11.409 19:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:11.409 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:11.409 19:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:10:11.409 19:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.409 19:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:11.409 19:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.409 19:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:10:11.409 19:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:11.409 19:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:11.409 19:44:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:10:11.409 19:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:10:11.409 19:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:10:11.409 19:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:11.409 19:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:11.409 19:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:11.409 19:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:11.409 19:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:11.409 19:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:11.409 19:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.409 19:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:11.668 19:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.668 19:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:11.668 19:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:11.668 19:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:11.668 00:10:11.925 19:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:11.925 19:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:11.925 19:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:11.925 19:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:11.925 19:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:11.925 19:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.925 19:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:11.925 19:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.925 19:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:11.925 { 00:10:11.925 "cntlid": 97, 00:10:11.925 "qid": 0, 00:10:11.925 "state": "enabled", 00:10:11.925 "thread": "nvmf_tgt_poll_group_000", 00:10:11.925 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:10:11.925 "listen_address": { 00:10:11.925 "trtype": "TCP", 00:10:11.925 "adrfam": "IPv4", 00:10:11.925 "traddr": "10.0.0.3", 00:10:11.925 "trsvcid": "4420" 00:10:11.925 }, 00:10:11.925 "peer_address": { 00:10:11.925 "trtype": "TCP", 00:10:11.925 "adrfam": "IPv4", 00:10:11.925 "traddr": "10.0.0.1", 00:10:11.925 "trsvcid": "53142" 00:10:11.925 }, 00:10:11.925 "auth": { 00:10:11.925 "state": "completed", 00:10:11.925 "digest": "sha512", 00:10:11.925 "dhgroup": "null" 00:10:11.925 } 00:10:11.925 } 00:10:11.925 ]' 00:10:11.925 19:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:12.183 19:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:12.183 19:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:12.183 19:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:12.183 19:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:12.183 19:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:12.183 19:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:12.183 19:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:12.441 19:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDY1ZGEwOGRhYjUzZjE5NzI1ZjE1YWM1OWNmMDQ4ZTFjMGUzOTRhZGRlNDU1NmUwwJmTAQ==: --dhchap-ctrl-secret DHHC-1:03:YzgzZWVhZmJmNDNjOTRlODNlNmZhNDZkZGNjZTg0NWRkZjc0NWU0OTRiYWM0NzUwZWIzZjA4MTFkMWY2YTIzOTavnEE=: 00:10:12.441 19:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:00:ZDY1ZGEwOGRhYjUzZjE5NzI1ZjE1YWM1OWNmMDQ4ZTFjMGUzOTRhZGRlNDU1NmUwwJmTAQ==: --dhchap-ctrl-secret DHHC-1:03:YzgzZWVhZmJmNDNjOTRlODNlNmZhNDZkZGNjZTg0NWRkZjc0NWU0OTRiYWM0NzUwZWIzZjA4MTFkMWY2YTIzOTavnEE=: 00:10:13.006 19:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:13.006 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:13.006 19:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:10:13.006 19:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.006 19:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.006 19:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:13.006 19:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:13.006 19:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:10:13.007 19:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:10:13.007 19:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:10:13.007 19:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:13.007 19:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:13.007 19:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:13.007 19:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:13.007 19:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:13.007 19:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:13.007 19:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.007 19:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.265 19:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.265 19:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:13.265 19:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:13.265 19:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:13.523 00:10:13.523 19:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:13.523 19:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:13.523 19:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:13.523 19:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:13.523 19:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:13.523 19:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.523 19:44:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.523 19:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.523 19:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:13.523 { 00:10:13.523 "cntlid": 99, 00:10:13.523 "qid": 0, 00:10:13.523 "state": "enabled", 00:10:13.523 "thread": "nvmf_tgt_poll_group_000", 00:10:13.523 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:10:13.523 "listen_address": { 00:10:13.523 "trtype": "TCP", 00:10:13.523 "adrfam": "IPv4", 00:10:13.523 "traddr": "10.0.0.3", 00:10:13.523 "trsvcid": "4420" 00:10:13.523 }, 00:10:13.523 "peer_address": { 00:10:13.523 "trtype": "TCP", 00:10:13.523 "adrfam": "IPv4", 00:10:13.523 "traddr": "10.0.0.1", 00:10:13.523 "trsvcid": "36350" 00:10:13.523 }, 00:10:13.523 "auth": { 00:10:13.523 "state": "completed", 00:10:13.523 "digest": "sha512", 00:10:13.523 "dhgroup": "null" 00:10:13.523 } 00:10:13.523 } 00:10:13.523 ]' 00:10:13.523 19:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:13.781 19:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:13.781 19:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:13.781 19:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:13.781 19:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:13.781 19:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:13.781 19:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:13.781 19:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:14.154 19:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDU4N2QxNTAwOWNiOTA5MzQxZTYxOTA2YzRmYjMxM2Jh56XD: --dhchap-ctrl-secret DHHC-1:02:NmJlYjViMjQzYTBhYjVlMTQ1YWZjMTNhM2M3YmFjMTNlZGY3YTg4YjYwMGY1NjQzMW7RBQ==: 00:10:14.154 19:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:01:NDU4N2QxNTAwOWNiOTA5MzQxZTYxOTA2YzRmYjMxM2Jh56XD: --dhchap-ctrl-secret DHHC-1:02:NmJlYjViMjQzYTBhYjVlMTQ1YWZjMTNhM2M3YmFjMTNlZGY3YTg4YjYwMGY1NjQzMW7RBQ==: 00:10:14.411 19:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:14.411 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:14.411 19:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:10:14.411 19:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.411 19:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:14.411 19:44:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.411 19:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:14.411 19:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:10:14.411 19:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:10:14.669 19:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:10:14.669 19:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:14.669 19:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:14.669 19:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:14.669 19:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:14.669 19:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:14.669 19:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:14.669 19:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.669 19:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:14.669 19:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.669 19:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:14.669 19:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:14.669 19:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:14.927 00:10:14.927 19:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:14.927 19:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:14.927 19:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:15.186 19:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:15.186 19:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:15.186 19:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.186 19:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:15.186 19:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.186 19:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:15.186 { 00:10:15.186 "cntlid": 101, 00:10:15.186 "qid": 0, 00:10:15.186 "state": "enabled", 00:10:15.186 "thread": "nvmf_tgt_poll_group_000", 00:10:15.186 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:10:15.186 "listen_address": { 00:10:15.186 "trtype": "TCP", 00:10:15.186 "adrfam": "IPv4", 00:10:15.186 "traddr": "10.0.0.3", 00:10:15.186 "trsvcid": "4420" 00:10:15.186 }, 00:10:15.186 "peer_address": { 00:10:15.186 "trtype": "TCP", 00:10:15.186 "adrfam": "IPv4", 00:10:15.186 "traddr": "10.0.0.1", 00:10:15.186 "trsvcid": "36372" 00:10:15.186 }, 00:10:15.186 "auth": { 00:10:15.186 "state": "completed", 00:10:15.186 "digest": "sha512", 00:10:15.186 "dhgroup": "null" 00:10:15.186 } 00:10:15.186 } 00:10:15.186 ]' 00:10:15.186 19:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:15.186 19:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:15.186 19:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:15.186 19:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:15.186 19:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:15.186 19:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:15.186 19:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:15.186 19:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:15.444 19:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTViNGY5ZmRiYTg0YmNhNzdmNjY1Y2YxMzdlYjI5MWIzOTZlOGRmYWUxZjdhN2NjEiwkGw==: --dhchap-ctrl-secret DHHC-1:01:OGQ4OGMxZjUwODc2NWI5OTI5OGE4NDgxYmY5NWEwYjk6tdyP: 00:10:15.444 19:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:02:MTViNGY5ZmRiYTg0YmNhNzdmNjY1Y2YxMzdlYjI5MWIzOTZlOGRmYWUxZjdhN2NjEiwkGw==: --dhchap-ctrl-secret DHHC-1:01:OGQ4OGMxZjUwODc2NWI5OTI5OGE4NDgxYmY5NWEwYjk6tdyP: 00:10:16.009 19:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:16.009 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:16.009 19:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:10:16.009 19:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.009 19:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:10:16.009 19:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.009 19:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:16.009 19:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:10:16.009 19:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:10:16.266 19:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:10:16.266 19:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:16.266 19:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:16.266 19:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:16.266 19:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:16.266 19:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:16.266 19:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key3 00:10:16.266 19:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.266 19:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.524 19:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.524 19:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:16.524 19:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:16.524 19:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:16.782 00:10:16.782 19:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:16.782 19:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:16.782 19:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:16.782 19:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:16.782 19:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:16.782 19:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:16.782 19:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.782 19:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.782 19:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:16.782 { 00:10:16.782 "cntlid": 103, 00:10:16.782 "qid": 0, 00:10:16.782 "state": "enabled", 00:10:16.782 "thread": "nvmf_tgt_poll_group_000", 00:10:16.782 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:10:16.782 "listen_address": { 00:10:16.782 "trtype": "TCP", 00:10:16.782 "adrfam": "IPv4", 00:10:16.782 "traddr": "10.0.0.3", 00:10:16.782 "trsvcid": "4420" 00:10:16.782 }, 00:10:16.782 "peer_address": { 00:10:16.782 "trtype": "TCP", 00:10:16.782 "adrfam": "IPv4", 00:10:16.782 "traddr": "10.0.0.1", 00:10:16.782 "trsvcid": "36392" 00:10:16.782 }, 00:10:16.782 "auth": { 00:10:16.782 "state": "completed", 00:10:16.782 "digest": "sha512", 00:10:16.782 "dhgroup": "null" 00:10:16.782 } 00:10:16.782 } 00:10:16.782 ]' 00:10:16.782 19:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:17.039 19:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:17.039 19:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:17.039 19:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:17.039 19:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:17.039 19:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:17.039 19:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:17.039 19:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:17.298 19:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjBhNTE4OThjOWM1MjBjNzA0MjE0MDU1NWU5YWE2MmQyOTZmNWYyMTcwOWZlZmUzMTM2YzdhZmY2ZWYwNDY3NMoRYKs=: 00:10:17.298 19:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:03:ZjBhNTE4OThjOWM1MjBjNzA0MjE0MDU1NWU5YWE2MmQyOTZmNWYyMTcwOWZlZmUzMTM2YzdhZmY2ZWYwNDY3NMoRYKs=: 00:10:17.863 19:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:17.863 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:17.863 19:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:10:17.863 19:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.863 19:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:17.863 19:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:10:17.863 19:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:17.863 19:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:17.863 19:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:10:17.863 19:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:10:18.186 19:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:10:18.186 19:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:18.186 19:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:18.186 19:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:18.186 19:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:18.186 19:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:18.186 19:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:18.186 19:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.186 19:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:18.186 19:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.186 19:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:18.186 19:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:18.186 19:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:18.186 00:10:18.186 19:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:18.186 19:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:18.186 19:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:18.444 19:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:18.444 19:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:18.444 
19:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.444 19:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:18.444 19:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.444 19:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:18.444 { 00:10:18.444 "cntlid": 105, 00:10:18.444 "qid": 0, 00:10:18.444 "state": "enabled", 00:10:18.444 "thread": "nvmf_tgt_poll_group_000", 00:10:18.444 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:10:18.444 "listen_address": { 00:10:18.444 "trtype": "TCP", 00:10:18.444 "adrfam": "IPv4", 00:10:18.444 "traddr": "10.0.0.3", 00:10:18.444 "trsvcid": "4420" 00:10:18.444 }, 00:10:18.444 "peer_address": { 00:10:18.444 "trtype": "TCP", 00:10:18.444 "adrfam": "IPv4", 00:10:18.444 "traddr": "10.0.0.1", 00:10:18.444 "trsvcid": "36410" 00:10:18.444 }, 00:10:18.444 "auth": { 00:10:18.444 "state": "completed", 00:10:18.444 "digest": "sha512", 00:10:18.444 "dhgroup": "ffdhe2048" 00:10:18.444 } 00:10:18.444 } 00:10:18.444 ]' 00:10:18.444 19:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:18.444 19:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:18.444 19:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:18.703 19:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:18.703 19:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:18.703 19:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:18.703 19:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:18.703 19:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:18.961 19:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDY1ZGEwOGRhYjUzZjE5NzI1ZjE1YWM1OWNmMDQ4ZTFjMGUzOTRhZGRlNDU1NmUwwJmTAQ==: --dhchap-ctrl-secret DHHC-1:03:YzgzZWVhZmJmNDNjOTRlODNlNmZhNDZkZGNjZTg0NWRkZjc0NWU0OTRiYWM0NzUwZWIzZjA4MTFkMWY2YTIzOTavnEE=: 00:10:18.961 19:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:00:ZDY1ZGEwOGRhYjUzZjE5NzI1ZjE1YWM1OWNmMDQ4ZTFjMGUzOTRhZGRlNDU1NmUwwJmTAQ==: --dhchap-ctrl-secret DHHC-1:03:YzgzZWVhZmJmNDNjOTRlODNlNmZhNDZkZGNjZTg0NWRkZjc0NWU0OTRiYWM0NzUwZWIzZjA4MTFkMWY2YTIzOTavnEE=: 00:10:19.528 19:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:19.528 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:19.528 19:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:10:19.528 19:44:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.528 19:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:19.528 19:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.528 19:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:19.528 19:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:10:19.528 19:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:10:19.528 19:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:10:19.528 19:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:19.528 19:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:19.528 19:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:19.528 19:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:19.528 19:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:19.528 19:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:19.528 19:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.528 19:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:19.528 19:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.528 19:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:19.528 19:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:19.528 19:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:19.787 00:10:19.787 19:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:19.787 19:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:19.787 19:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:20.046 19:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:10:20.046 19:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:20.046 19:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.046 19:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:20.046 19:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.046 19:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:20.046 { 00:10:20.046 "cntlid": 107, 00:10:20.046 "qid": 0, 00:10:20.046 "state": "enabled", 00:10:20.046 "thread": "nvmf_tgt_poll_group_000", 00:10:20.046 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:10:20.046 "listen_address": { 00:10:20.046 "trtype": "TCP", 00:10:20.046 "adrfam": "IPv4", 00:10:20.046 "traddr": "10.0.0.3", 00:10:20.046 "trsvcid": "4420" 00:10:20.046 }, 00:10:20.046 "peer_address": { 00:10:20.046 "trtype": "TCP", 00:10:20.046 "adrfam": "IPv4", 00:10:20.046 "traddr": "10.0.0.1", 00:10:20.046 "trsvcid": "36444" 00:10:20.046 }, 00:10:20.046 "auth": { 00:10:20.046 "state": "completed", 00:10:20.046 "digest": "sha512", 00:10:20.046 "dhgroup": "ffdhe2048" 00:10:20.046 } 00:10:20.046 } 00:10:20.046 ]' 00:10:20.046 19:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:20.046 19:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:20.046 19:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:20.305 19:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:20.305 19:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:20.305 19:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:20.305 19:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:20.305 19:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:20.562 19:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDU4N2QxNTAwOWNiOTA5MzQxZTYxOTA2YzRmYjMxM2Jh56XD: --dhchap-ctrl-secret DHHC-1:02:NmJlYjViMjQzYTBhYjVlMTQ1YWZjMTNhM2M3YmFjMTNlZGY3YTg4YjYwMGY1NjQzMW7RBQ==: 00:10:20.562 19:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:01:NDU4N2QxNTAwOWNiOTA5MzQxZTYxOTA2YzRmYjMxM2Jh56XD: --dhchap-ctrl-secret DHHC-1:02:NmJlYjViMjQzYTBhYjVlMTQ1YWZjMTNhM2M3YmFjMTNlZGY3YTg4YjYwMGY1NjQzMW7RBQ==: 00:10:21.130 19:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:21.130 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:21.130 19:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:10:21.130 19:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.130 19:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.130 19:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.130 19:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:21.130 19:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:10:21.130 19:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:10:21.130 19:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:10:21.130 19:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:21.130 19:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:21.130 19:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:21.130 19:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:21.130 19:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:21.130 19:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:21.130 19:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.130 19:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.130 19:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.130 19:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:21.130 19:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:21.130 19:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:21.388 00:10:21.388 19:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:21.388 19:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:21.388 19:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:10:21.646 19:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:21.646 19:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:21.646 19:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.646 19:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.646 19:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.646 19:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:21.646 { 00:10:21.646 "cntlid": 109, 00:10:21.646 "qid": 0, 00:10:21.646 "state": "enabled", 00:10:21.646 "thread": "nvmf_tgt_poll_group_000", 00:10:21.646 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:10:21.646 "listen_address": { 00:10:21.646 "trtype": "TCP", 00:10:21.646 "adrfam": "IPv4", 00:10:21.646 "traddr": "10.0.0.3", 00:10:21.646 "trsvcid": "4420" 00:10:21.646 }, 00:10:21.646 "peer_address": { 00:10:21.646 "trtype": "TCP", 00:10:21.646 "adrfam": "IPv4", 00:10:21.646 "traddr": "10.0.0.1", 00:10:21.646 "trsvcid": "36470" 00:10:21.646 }, 00:10:21.646 "auth": { 00:10:21.646 "state": "completed", 00:10:21.646 "digest": "sha512", 00:10:21.646 "dhgroup": "ffdhe2048" 00:10:21.646 } 00:10:21.646 } 00:10:21.646 ]' 00:10:21.646 19:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:21.646 19:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:21.646 19:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:21.646 19:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:21.646 19:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:21.904 19:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:21.904 19:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:21.904 19:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:21.904 19:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTViNGY5ZmRiYTg0YmNhNzdmNjY1Y2YxMzdlYjI5MWIzOTZlOGRmYWUxZjdhN2NjEiwkGw==: --dhchap-ctrl-secret DHHC-1:01:OGQ4OGMxZjUwODc2NWI5OTI5OGE4NDgxYmY5NWEwYjk6tdyP: 00:10:21.904 19:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:02:MTViNGY5ZmRiYTg0YmNhNzdmNjY1Y2YxMzdlYjI5MWIzOTZlOGRmYWUxZjdhN2NjEiwkGw==: --dhchap-ctrl-secret DHHC-1:01:OGQ4OGMxZjUwODc2NWI5OTI5OGE4NDgxYmY5NWEwYjk6tdyP: 00:10:22.470 19:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:22.470 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:10:22.470 19:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:10:22.470 19:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.470 19:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.470 19:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.471 19:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:22.471 19:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:10:22.471 19:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:10:23.036 19:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:10:23.036 19:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:23.036 19:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:23.036 19:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:23.036 19:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:23.036 19:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:23.036 19:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key3 00:10:23.036 19:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.036 19:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.036 19:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.036 19:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:23.036 19:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:23.036 19:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:23.036 00:10:23.036 19:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:23.036 19:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:23.036 19:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:23.295 19:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:23.295 19:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:23.295 19:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.295 19:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.295 19:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.295 19:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:23.295 { 00:10:23.295 "cntlid": 111, 00:10:23.295 "qid": 0, 00:10:23.295 "state": "enabled", 00:10:23.295 "thread": "nvmf_tgt_poll_group_000", 00:10:23.295 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:10:23.295 "listen_address": { 00:10:23.295 "trtype": "TCP", 00:10:23.295 "adrfam": "IPv4", 00:10:23.295 "traddr": "10.0.0.3", 00:10:23.295 "trsvcid": "4420" 00:10:23.295 }, 00:10:23.295 "peer_address": { 00:10:23.295 "trtype": "TCP", 00:10:23.295 "adrfam": "IPv4", 00:10:23.295 "traddr": "10.0.0.1", 00:10:23.295 "trsvcid": "48896" 00:10:23.295 }, 00:10:23.295 "auth": { 00:10:23.295 "state": "completed", 00:10:23.295 "digest": "sha512", 00:10:23.295 "dhgroup": "ffdhe2048" 00:10:23.295 } 00:10:23.295 } 00:10:23.295 ]' 00:10:23.295 19:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:23.295 19:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:23.295 19:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:23.553 19:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:23.553 19:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:23.553 19:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:23.553 19:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:23.553 19:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:23.553 19:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjBhNTE4OThjOWM1MjBjNzA0MjE0MDU1NWU5YWE2MmQyOTZmNWYyMTcwOWZlZmUzMTM2YzdhZmY2ZWYwNDY3NMoRYKs=: 00:10:23.553 19:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:03:ZjBhNTE4OThjOWM1MjBjNzA0MjE0MDU1NWU5YWE2MmQyOTZmNWYyMTcwOWZlZmUzMTM2YzdhZmY2ZWYwNDY3NMoRYKs=: 00:10:24.117 19:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:24.375 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:24.375 19:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:10:24.375 19:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.375 19:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.375 19:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.375 19:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:24.375 19:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:24.375 19:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:10:24.375 19:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:10:24.375 19:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:10:24.375 19:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:24.375 19:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:24.375 19:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:24.375 19:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:24.375 19:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:24.375 19:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:24.375 19:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.375 19:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.376 19:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.376 19:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:24.376 19:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:24.376 19:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:24.634 00:10:24.634 19:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:24.634 19:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:24.634 19:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:24.892 19:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:24.892 19:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:24.892 19:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.892 19:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.892 19:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.892 19:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:24.892 { 00:10:24.892 "cntlid": 113, 00:10:24.892 "qid": 0, 00:10:24.892 "state": "enabled", 00:10:24.892 "thread": "nvmf_tgt_poll_group_000", 00:10:24.892 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:10:24.892 "listen_address": { 00:10:24.892 "trtype": "TCP", 00:10:24.892 "adrfam": "IPv4", 00:10:24.892 "traddr": "10.0.0.3", 00:10:24.892 "trsvcid": "4420" 00:10:24.892 }, 00:10:24.892 "peer_address": { 00:10:24.892 "trtype": "TCP", 00:10:24.892 "adrfam": "IPv4", 00:10:24.892 "traddr": "10.0.0.1", 00:10:24.892 "trsvcid": "48930" 00:10:24.892 }, 00:10:24.892 "auth": { 00:10:24.892 "state": "completed", 00:10:24.892 "digest": "sha512", 00:10:24.892 "dhgroup": "ffdhe3072" 00:10:24.892 } 00:10:24.892 } 00:10:24.892 ]' 00:10:24.892 19:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:24.892 19:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:24.892 19:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:25.150 19:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:25.150 19:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:25.150 19:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:25.150 19:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:25.150 19:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:25.409 19:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDY1ZGEwOGRhYjUzZjE5NzI1ZjE1YWM1OWNmMDQ4ZTFjMGUzOTRhZGRlNDU1NmUwwJmTAQ==: --dhchap-ctrl-secret DHHC-1:03:YzgzZWVhZmJmNDNjOTRlODNlNmZhNDZkZGNjZTg0NWRkZjc0NWU0OTRiYWM0NzUwZWIzZjA4MTFkMWY2YTIzOTavnEE=: 00:10:25.409 19:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:00:ZDY1ZGEwOGRhYjUzZjE5NzI1ZjE1YWM1OWNmMDQ4ZTFjMGUzOTRhZGRlNDU1NmUwwJmTAQ==: --dhchap-ctrl-secret 
DHHC-1:03:YzgzZWVhZmJmNDNjOTRlODNlNmZhNDZkZGNjZTg0NWRkZjc0NWU0OTRiYWM0NzUwZWIzZjA4MTFkMWY2YTIzOTavnEE=: 00:10:25.976 19:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:25.976 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:25.976 19:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:10:25.976 19:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.976 19:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.976 19:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.976 19:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:25.976 19:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:10:25.976 19:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:10:25.976 19:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:10:25.976 19:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:25.976 19:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:25.976 19:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:25.976 19:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:25.976 19:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:25.976 19:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:25.976 19:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.976 19:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.976 19:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.976 19:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:25.976 19:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:25.976 19:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:26.233 00:10:26.233 19:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:26.233 19:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:26.233 19:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:26.490 19:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:26.490 19:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:26.490 19:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.490 19:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.490 19:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.490 19:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:26.490 { 00:10:26.490 "cntlid": 115, 00:10:26.490 "qid": 0, 00:10:26.490 "state": "enabled", 00:10:26.490 "thread": "nvmf_tgt_poll_group_000", 00:10:26.490 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:10:26.490 "listen_address": { 00:10:26.490 "trtype": "TCP", 00:10:26.490 "adrfam": "IPv4", 00:10:26.490 "traddr": "10.0.0.3", 00:10:26.490 "trsvcid": "4420" 00:10:26.490 }, 00:10:26.490 "peer_address": { 00:10:26.490 "trtype": "TCP", 00:10:26.490 "adrfam": "IPv4", 00:10:26.490 "traddr": "10.0.0.1", 00:10:26.490 "trsvcid": "48958" 00:10:26.490 }, 00:10:26.490 "auth": { 00:10:26.490 "state": "completed", 00:10:26.490 "digest": "sha512", 00:10:26.490 "dhgroup": "ffdhe3072" 00:10:26.490 } 00:10:26.490 } 00:10:26.490 ]' 00:10:26.490 19:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:26.490 19:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:26.490 19:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:26.490 19:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:26.490 19:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:26.748 19:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:26.748 19:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:26.748 19:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:26.748 19:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDU4N2QxNTAwOWNiOTA5MzQxZTYxOTA2YzRmYjMxM2Jh56XD: --dhchap-ctrl-secret DHHC-1:02:NmJlYjViMjQzYTBhYjVlMTQ1YWZjMTNhM2M3YmFjMTNlZGY3YTg4YjYwMGY1NjQzMW7RBQ==: 00:10:26.748 19:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 
91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:01:NDU4N2QxNTAwOWNiOTA5MzQxZTYxOTA2YzRmYjMxM2Jh56XD: --dhchap-ctrl-secret DHHC-1:02:NmJlYjViMjQzYTBhYjVlMTQ1YWZjMTNhM2M3YmFjMTNlZGY3YTg4YjYwMGY1NjQzMW7RBQ==: 00:10:27.345 19:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:27.345 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:27.345 19:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:10:27.345 19:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.345 19:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.345 19:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.345 19:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:27.345 19:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:10:27.345 19:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:10:27.606 19:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:10:27.606 19:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:27.606 19:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:27.606 19:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:27.606 19:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:27.606 19:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:27.606 19:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:27.606 19:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.606 19:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.606 19:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.606 19:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:27.606 19:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:27.606 19:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:27.864 00:10:27.864 19:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:27.864 19:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:27.864 19:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:28.122 19:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:28.122 19:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:28.122 19:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.122 19:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.122 19:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.122 19:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:28.122 { 00:10:28.122 "cntlid": 117, 00:10:28.122 "qid": 0, 00:10:28.122 "state": "enabled", 00:10:28.122 "thread": "nvmf_tgt_poll_group_000", 00:10:28.122 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:10:28.122 "listen_address": { 00:10:28.122 "trtype": "TCP", 00:10:28.122 "adrfam": "IPv4", 00:10:28.122 "traddr": "10.0.0.3", 00:10:28.122 "trsvcid": "4420" 00:10:28.122 }, 00:10:28.122 "peer_address": { 00:10:28.122 "trtype": "TCP", 00:10:28.122 "adrfam": "IPv4", 00:10:28.122 "traddr": "10.0.0.1", 00:10:28.122 "trsvcid": "48992" 00:10:28.122 }, 00:10:28.122 "auth": { 00:10:28.122 "state": "completed", 00:10:28.122 "digest": "sha512", 00:10:28.122 "dhgroup": "ffdhe3072" 00:10:28.122 } 00:10:28.122 } 00:10:28.122 ]' 00:10:28.122 19:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:28.122 19:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:28.122 19:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:28.122 19:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:28.122 19:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:28.379 19:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:28.379 19:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:28.379 19:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:28.380 19:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTViNGY5ZmRiYTg0YmNhNzdmNjY1Y2YxMzdlYjI5MWIzOTZlOGRmYWUxZjdhN2NjEiwkGw==: --dhchap-ctrl-secret DHHC-1:01:OGQ4OGMxZjUwODc2NWI5OTI5OGE4NDgxYmY5NWEwYjk6tdyP: 00:10:28.380 19:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:02:MTViNGY5ZmRiYTg0YmNhNzdmNjY1Y2YxMzdlYjI5MWIzOTZlOGRmYWUxZjdhN2NjEiwkGw==: --dhchap-ctrl-secret DHHC-1:01:OGQ4OGMxZjUwODc2NWI5OTI5OGE4NDgxYmY5NWEwYjk6tdyP: 00:10:28.944 19:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:28.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:28.944 19:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:10:28.944 19:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.944 19:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.944 19:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.944 19:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:28.944 19:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:10:28.944 19:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:10:29.202 19:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:10:29.202 19:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:29.202 19:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:29.202 19:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:29.202 19:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:29.202 19:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:29.202 19:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key3 00:10:29.202 19:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.202 19:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.202 19:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.202 19:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:29.202 19:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:29.202 19:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:29.461 00:10:29.461 19:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:29.461 19:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:29.461 19:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:29.719 19:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:29.719 19:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:29.719 19:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.719 19:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.719 19:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.719 19:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:29.719 { 00:10:29.719 "cntlid": 119, 00:10:29.719 "qid": 0, 00:10:29.719 "state": "enabled", 00:10:29.719 "thread": "nvmf_tgt_poll_group_000", 00:10:29.719 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:10:29.719 "listen_address": { 00:10:29.719 "trtype": "TCP", 00:10:29.719 "adrfam": "IPv4", 00:10:29.719 "traddr": "10.0.0.3", 00:10:29.719 "trsvcid": "4420" 00:10:29.719 }, 00:10:29.719 "peer_address": { 00:10:29.719 "trtype": "TCP", 00:10:29.719 "adrfam": "IPv4", 00:10:29.719 "traddr": "10.0.0.1", 00:10:29.719 "trsvcid": "49022" 00:10:29.719 }, 00:10:29.719 "auth": { 00:10:29.720 "state": "completed", 00:10:29.720 "digest": "sha512", 00:10:29.720 "dhgroup": "ffdhe3072" 00:10:29.720 } 00:10:29.720 } 00:10:29.720 ]' 00:10:29.720 19:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:29.720 19:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:29.720 19:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:29.720 19:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:29.720 19:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:29.982 19:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:29.982 19:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:29.982 19:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:29.982 19:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjBhNTE4OThjOWM1MjBjNzA0MjE0MDU1NWU5YWE2MmQyOTZmNWYyMTcwOWZlZmUzMTM2YzdhZmY2ZWYwNDY3NMoRYKs=: 00:10:29.982 19:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:03:ZjBhNTE4OThjOWM1MjBjNzA0MjE0MDU1NWU5YWE2MmQyOTZmNWYyMTcwOWZlZmUzMTM2YzdhZmY2ZWYwNDY3NMoRYKs=: 00:10:30.610 19:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:30.610 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:30.610 19:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:10:30.610 19:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.610 19:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:30.610 19:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.610 19:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:30.610 19:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:30.610 19:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:10:30.610 19:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:10:30.868 19:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:10:30.868 19:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:30.868 19:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:30.868 19:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:30.868 19:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:30.868 19:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:30.868 19:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:30.868 19:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.868 19:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:30.868 19:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.868 19:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:30.868 19:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:30.869 19:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:31.126 00:10:31.126 19:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:31.126 19:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:31.126 19:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:31.383 19:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:31.383 19:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:31.383 19:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.383 19:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:31.383 19:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.383 19:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:31.383 { 00:10:31.383 "cntlid": 121, 00:10:31.383 "qid": 0, 00:10:31.383 "state": "enabled", 00:10:31.383 "thread": "nvmf_tgt_poll_group_000", 00:10:31.383 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:10:31.383 "listen_address": { 00:10:31.383 "trtype": "TCP", 00:10:31.383 "adrfam": "IPv4", 00:10:31.383 "traddr": "10.0.0.3", 00:10:31.383 "trsvcid": "4420" 00:10:31.383 }, 00:10:31.383 "peer_address": { 00:10:31.383 "trtype": "TCP", 00:10:31.383 "adrfam": "IPv4", 00:10:31.383 "traddr": "10.0.0.1", 00:10:31.383 "trsvcid": "49048" 00:10:31.383 }, 00:10:31.383 "auth": { 00:10:31.383 "state": "completed", 00:10:31.383 "digest": "sha512", 00:10:31.383 "dhgroup": "ffdhe4096" 00:10:31.383 } 00:10:31.383 } 00:10:31.383 ]' 00:10:31.383 19:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:31.383 19:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:31.383 19:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:31.383 19:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:31.383 19:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:31.644 19:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:31.644 19:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:31.644 19:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:31.903 19:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDY1ZGEwOGRhYjUzZjE5NzI1ZjE1YWM1OWNmMDQ4ZTFjMGUzOTRhZGRlNDU1NmUwwJmTAQ==: --dhchap-ctrl-secret 
DHHC-1:03:YzgzZWVhZmJmNDNjOTRlODNlNmZhNDZkZGNjZTg0NWRkZjc0NWU0OTRiYWM0NzUwZWIzZjA4MTFkMWY2YTIzOTavnEE=: 00:10:31.903 19:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:00:ZDY1ZGEwOGRhYjUzZjE5NzI1ZjE1YWM1OWNmMDQ4ZTFjMGUzOTRhZGRlNDU1NmUwwJmTAQ==: --dhchap-ctrl-secret DHHC-1:03:YzgzZWVhZmJmNDNjOTRlODNlNmZhNDZkZGNjZTg0NWRkZjc0NWU0OTRiYWM0NzUwZWIzZjA4MTFkMWY2YTIzOTavnEE=: 00:10:32.467 19:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:32.467 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:32.467 19:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:10:32.467 19:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.467 19:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.467 19:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.467 19:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:32.467 19:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:10:32.467 19:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:10:32.467 19:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:10:32.467 19:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:32.467 19:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:32.467 19:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:32.467 19:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:32.467 19:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:32.467 19:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:32.467 19:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.467 19:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.467 19:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.467 19:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:32.467 19:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:32.467 19:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:32.725 00:10:32.725 19:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:32.725 19:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:32.725 19:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:32.983 19:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:32.983 19:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:32.983 19:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.983 19:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.983 19:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.983 19:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:32.983 { 00:10:32.983 "cntlid": 123, 00:10:32.983 "qid": 0, 00:10:32.983 "state": "enabled", 00:10:32.983 "thread": "nvmf_tgt_poll_group_000", 00:10:32.983 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:10:32.983 "listen_address": { 00:10:32.983 "trtype": "TCP", 00:10:32.983 "adrfam": "IPv4", 00:10:32.983 "traddr": "10.0.0.3", 00:10:32.983 "trsvcid": "4420" 00:10:32.983 }, 00:10:32.983 "peer_address": { 00:10:32.983 "trtype": "TCP", 00:10:32.983 "adrfam": "IPv4", 00:10:32.983 "traddr": "10.0.0.1", 00:10:32.983 "trsvcid": "60454" 00:10:32.983 }, 00:10:32.983 "auth": { 00:10:32.983 "state": "completed", 00:10:32.983 "digest": "sha512", 00:10:32.983 "dhgroup": "ffdhe4096" 00:10:32.983 } 00:10:32.983 } 00:10:32.983 ]' 00:10:32.983 19:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:32.983 19:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:32.983 19:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:32.983 19:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:32.983 19:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:32.983 19:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:32.983 19:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:32.983 19:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:33.240 19:44:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDU4N2QxNTAwOWNiOTA5MzQxZTYxOTA2YzRmYjMxM2Jh56XD: --dhchap-ctrl-secret DHHC-1:02:NmJlYjViMjQzYTBhYjVlMTQ1YWZjMTNhM2M3YmFjMTNlZGY3YTg4YjYwMGY1NjQzMW7RBQ==: 00:10:33.240 19:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:01:NDU4N2QxNTAwOWNiOTA5MzQxZTYxOTA2YzRmYjMxM2Jh56XD: --dhchap-ctrl-secret DHHC-1:02:NmJlYjViMjQzYTBhYjVlMTQ1YWZjMTNhM2M3YmFjMTNlZGY3YTg4YjYwMGY1NjQzMW7RBQ==: 00:10:33.805 19:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:33.805 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:33.805 19:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:10:33.805 19:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.805 19:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.805 19:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.805 19:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:33.805 19:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:10:33.805 19:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:10:34.063 19:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:10:34.063 19:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:34.063 19:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:34.063 19:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:34.063 19:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:34.063 19:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:34.063 19:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:34.063 19:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.063 19:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:34.063 19:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.063 19:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:34.063 19:44:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:34.063 19:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:34.320 00:10:34.320 19:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:34.320 19:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:34.320 19:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:34.578 19:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:34.578 19:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:34.578 19:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.578 19:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:34.578 19:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.578 19:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:34.578 { 00:10:34.578 "cntlid": 125, 00:10:34.578 "qid": 0, 00:10:34.578 "state": "enabled", 00:10:34.578 "thread": "nvmf_tgt_poll_group_000", 00:10:34.578 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:10:34.578 "listen_address": { 00:10:34.578 "trtype": "TCP", 00:10:34.578 "adrfam": "IPv4", 00:10:34.578 "traddr": "10.0.0.3", 00:10:34.578 "trsvcid": "4420" 00:10:34.578 }, 00:10:34.578 "peer_address": { 00:10:34.578 "trtype": "TCP", 00:10:34.578 "adrfam": "IPv4", 00:10:34.578 "traddr": "10.0.0.1", 00:10:34.578 "trsvcid": "60472" 00:10:34.578 }, 00:10:34.578 "auth": { 00:10:34.578 "state": "completed", 00:10:34.578 "digest": "sha512", 00:10:34.578 "dhgroup": "ffdhe4096" 00:10:34.578 } 00:10:34.578 } 00:10:34.578 ]' 00:10:34.578 19:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:34.578 19:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:34.578 19:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:34.578 19:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:34.578 19:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:34.836 19:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:34.836 19:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:34.836 19:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:34.836 19:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTViNGY5ZmRiYTg0YmNhNzdmNjY1Y2YxMzdlYjI5MWIzOTZlOGRmYWUxZjdhN2NjEiwkGw==: --dhchap-ctrl-secret DHHC-1:01:OGQ4OGMxZjUwODc2NWI5OTI5OGE4NDgxYmY5NWEwYjk6tdyP: 00:10:34.836 19:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:02:MTViNGY5ZmRiYTg0YmNhNzdmNjY1Y2YxMzdlYjI5MWIzOTZlOGRmYWUxZjdhN2NjEiwkGw==: --dhchap-ctrl-secret DHHC-1:01:OGQ4OGMxZjUwODc2NWI5OTI5OGE4NDgxYmY5NWEwYjk6tdyP: 00:10:35.401 19:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:35.401 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:35.401 19:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:10:35.401 19:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.401 19:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.660 19:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.660 19:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:35.660 19:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:10:35.660 19:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:10:35.660 19:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:10:35.660 19:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:35.660 19:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:35.660 19:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:35.660 19:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:35.660 19:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:35.660 19:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key3 00:10:35.660 19:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.660 19:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.660 19:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.660 19:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:10:35.660 19:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:35.660 19:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:36.226 00:10:36.226 19:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:36.226 19:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:36.226 19:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:36.226 19:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:36.226 19:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:36.226 19:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.226 19:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.226 19:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.226 19:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:36.226 { 00:10:36.226 "cntlid": 127, 00:10:36.226 "qid": 0, 00:10:36.226 "state": "enabled", 00:10:36.226 "thread": "nvmf_tgt_poll_group_000", 00:10:36.226 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:10:36.226 "listen_address": { 00:10:36.226 "trtype": "TCP", 00:10:36.226 "adrfam": "IPv4", 00:10:36.226 "traddr": "10.0.0.3", 00:10:36.226 "trsvcid": "4420" 00:10:36.226 }, 00:10:36.226 "peer_address": { 00:10:36.226 "trtype": "TCP", 00:10:36.226 "adrfam": "IPv4", 00:10:36.226 "traddr": "10.0.0.1", 00:10:36.226 "trsvcid": "60510" 00:10:36.226 }, 00:10:36.226 "auth": { 00:10:36.226 "state": "completed", 00:10:36.226 "digest": "sha512", 00:10:36.226 "dhgroup": "ffdhe4096" 00:10:36.226 } 00:10:36.226 } 00:10:36.226 ]' 00:10:36.226 19:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:36.226 19:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:36.226 19:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:36.484 19:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:36.484 19:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:36.484 19:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:36.484 19:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:36.484 19:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:36.742 19:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjBhNTE4OThjOWM1MjBjNzA0MjE0MDU1NWU5YWE2MmQyOTZmNWYyMTcwOWZlZmUzMTM2YzdhZmY2ZWYwNDY3NMoRYKs=: 00:10:36.742 19:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:03:ZjBhNTE4OThjOWM1MjBjNzA0MjE0MDU1NWU5YWE2MmQyOTZmNWYyMTcwOWZlZmUzMTM2YzdhZmY2ZWYwNDY3NMoRYKs=: 00:10:37.306 19:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:37.306 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:37.306 19:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:10:37.306 19:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.306 19:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.306 19:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.306 19:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:37.306 19:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:37.306 19:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:10:37.306 19:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:10:37.306 19:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:10:37.306 19:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:37.306 19:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:37.306 19:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:37.306 19:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:37.306 19:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:37.306 19:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:37.306 19:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.306 19:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.306 19:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.306 19:44:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:37.306 19:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:37.306 19:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:37.870 00:10:37.870 19:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:37.870 19:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:37.870 19:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:38.164 19:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:38.164 19:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:38.164 19:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.164 19:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.164 19:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.164 19:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:38.164 { 00:10:38.164 "cntlid": 129, 00:10:38.164 "qid": 0, 00:10:38.164 "state": "enabled", 00:10:38.164 "thread": "nvmf_tgt_poll_group_000", 00:10:38.164 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:10:38.164 "listen_address": { 00:10:38.164 "trtype": "TCP", 00:10:38.164 "adrfam": "IPv4", 00:10:38.164 "traddr": "10.0.0.3", 00:10:38.164 "trsvcid": "4420" 00:10:38.164 }, 00:10:38.164 "peer_address": { 00:10:38.164 "trtype": "TCP", 00:10:38.164 "adrfam": "IPv4", 00:10:38.164 "traddr": "10.0.0.1", 00:10:38.164 "trsvcid": "60536" 00:10:38.164 }, 00:10:38.164 "auth": { 00:10:38.164 "state": "completed", 00:10:38.164 "digest": "sha512", 00:10:38.164 "dhgroup": "ffdhe6144" 00:10:38.164 } 00:10:38.164 } 00:10:38.164 ]' 00:10:38.164 19:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:38.164 19:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:38.164 19:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:38.164 19:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:38.164 19:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:38.164 19:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:38.164 19:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:38.164 19:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:38.423 19:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDY1ZGEwOGRhYjUzZjE5NzI1ZjE1YWM1OWNmMDQ4ZTFjMGUzOTRhZGRlNDU1NmUwwJmTAQ==: --dhchap-ctrl-secret DHHC-1:03:YzgzZWVhZmJmNDNjOTRlODNlNmZhNDZkZGNjZTg0NWRkZjc0NWU0OTRiYWM0NzUwZWIzZjA4MTFkMWY2YTIzOTavnEE=: 00:10:38.423 19:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:00:ZDY1ZGEwOGRhYjUzZjE5NzI1ZjE1YWM1OWNmMDQ4ZTFjMGUzOTRhZGRlNDU1NmUwwJmTAQ==: --dhchap-ctrl-secret DHHC-1:03:YzgzZWVhZmJmNDNjOTRlODNlNmZhNDZkZGNjZTg0NWRkZjc0NWU0OTRiYWM0NzUwZWIzZjA4MTFkMWY2YTIzOTavnEE=: 00:10:38.987 19:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:38.987 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:38.987 19:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:10:38.987 19:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.987 19:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.987 19:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.987 19:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:38.987 19:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:10:38.987 19:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:10:39.243 19:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:10:39.243 19:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:39.243 19:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:39.243 19:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:39.243 19:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:39.243 19:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:39.243 19:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:39.243 19:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.243 19:44:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.243 19:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.243 19:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:39.243 19:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:39.244 19:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:39.501 00:10:39.501 19:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:39.501 19:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:39.501 19:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:39.758 19:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:39.758 19:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:39.758 19:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.758 19:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.758 19:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.758 19:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:39.758 { 00:10:39.758 "cntlid": 131, 00:10:39.758 "qid": 0, 00:10:39.758 "state": "enabled", 00:10:39.758 "thread": "nvmf_tgt_poll_group_000", 00:10:39.758 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:10:39.758 "listen_address": { 00:10:39.758 "trtype": "TCP", 00:10:39.758 "adrfam": "IPv4", 00:10:39.758 "traddr": "10.0.0.3", 00:10:39.758 "trsvcid": "4420" 00:10:39.758 }, 00:10:39.758 "peer_address": { 00:10:39.758 "trtype": "TCP", 00:10:39.758 "adrfam": "IPv4", 00:10:39.758 "traddr": "10.0.0.1", 00:10:39.758 "trsvcid": "60554" 00:10:39.758 }, 00:10:39.758 "auth": { 00:10:39.758 "state": "completed", 00:10:39.758 "digest": "sha512", 00:10:39.758 "dhgroup": "ffdhe6144" 00:10:39.758 } 00:10:39.758 } 00:10:39.758 ]' 00:10:39.758 19:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:39.758 19:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:39.758 19:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:39.758 19:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:39.758 19:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:10:39.758 19:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:39.758 19:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:39.758 19:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:40.015 19:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDU4N2QxNTAwOWNiOTA5MzQxZTYxOTA2YzRmYjMxM2Jh56XD: --dhchap-ctrl-secret DHHC-1:02:NmJlYjViMjQzYTBhYjVlMTQ1YWZjMTNhM2M3YmFjMTNlZGY3YTg4YjYwMGY1NjQzMW7RBQ==: 00:10:40.015 19:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:01:NDU4N2QxNTAwOWNiOTA5MzQxZTYxOTA2YzRmYjMxM2Jh56XD: --dhchap-ctrl-secret DHHC-1:02:NmJlYjViMjQzYTBhYjVlMTQ1YWZjMTNhM2M3YmFjMTNlZGY3YTg4YjYwMGY1NjQzMW7RBQ==: 00:10:40.579 19:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:40.579 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:40.579 19:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:10:40.579 19:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.579 19:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.579 19:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.579 19:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:40.579 19:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:10:40.579 19:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:10:40.836 19:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:10:40.836 19:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:40.836 19:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:40.836 19:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:40.836 19:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:40.836 19:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:40.836 19:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:40.836 19:44:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.836 19:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.836 19:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.836 19:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:40.836 19:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:40.836 19:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:41.401 00:10:41.401 19:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:41.401 19:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:41.401 19:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:41.401 19:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:41.401 19:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:41.401 19:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.401 19:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.401 19:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.401 19:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:41.401 { 00:10:41.401 "cntlid": 133, 00:10:41.401 "qid": 0, 00:10:41.401 "state": "enabled", 00:10:41.401 "thread": "nvmf_tgt_poll_group_000", 00:10:41.401 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:10:41.401 "listen_address": { 00:10:41.401 "trtype": "TCP", 00:10:41.402 "adrfam": "IPv4", 00:10:41.402 "traddr": "10.0.0.3", 00:10:41.402 "trsvcid": "4420" 00:10:41.402 }, 00:10:41.402 "peer_address": { 00:10:41.402 "trtype": "TCP", 00:10:41.402 "adrfam": "IPv4", 00:10:41.402 "traddr": "10.0.0.1", 00:10:41.402 "trsvcid": "60592" 00:10:41.402 }, 00:10:41.402 "auth": { 00:10:41.402 "state": "completed", 00:10:41.402 "digest": "sha512", 00:10:41.402 "dhgroup": "ffdhe6144" 00:10:41.402 } 00:10:41.402 } 00:10:41.402 ]' 00:10:41.402 19:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:41.402 19:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:41.402 19:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:41.660 19:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:10:41.660 19:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:41.660 19:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:41.660 19:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:41.660 19:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:41.660 19:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTViNGY5ZmRiYTg0YmNhNzdmNjY1Y2YxMzdlYjI5MWIzOTZlOGRmYWUxZjdhN2NjEiwkGw==: --dhchap-ctrl-secret DHHC-1:01:OGQ4OGMxZjUwODc2NWI5OTI5OGE4NDgxYmY5NWEwYjk6tdyP: 00:10:41.660 19:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:02:MTViNGY5ZmRiYTg0YmNhNzdmNjY1Y2YxMzdlYjI5MWIzOTZlOGRmYWUxZjdhN2NjEiwkGw==: --dhchap-ctrl-secret DHHC-1:01:OGQ4OGMxZjUwODc2NWI5OTI5OGE4NDgxYmY5NWEwYjk6tdyP: 00:10:42.611 19:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:42.612 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:42.612 19:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:10:42.612 19:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.612 19:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.612 19:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.612 19:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:42.612 19:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:10:42.612 19:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:10:42.612 19:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:10:42.612 19:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:42.612 19:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:42.612 19:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:42.612 19:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:42.612 19:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:42.612 19:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key3 00:10:42.612 19:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.612 19:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.612 19:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.612 19:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:42.612 19:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:42.612 19:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:42.869 00:10:42.869 19:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:42.869 19:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:42.869 19:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:43.126 19:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:43.126 19:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:43.126 19:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.126 19:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.126 19:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.126 19:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:43.126 { 00:10:43.126 "cntlid": 135, 00:10:43.126 "qid": 0, 00:10:43.126 "state": "enabled", 00:10:43.126 "thread": "nvmf_tgt_poll_group_000", 00:10:43.126 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:10:43.126 "listen_address": { 00:10:43.126 "trtype": "TCP", 00:10:43.126 "adrfam": "IPv4", 00:10:43.126 "traddr": "10.0.0.3", 00:10:43.126 "trsvcid": "4420" 00:10:43.126 }, 00:10:43.126 "peer_address": { 00:10:43.126 "trtype": "TCP", 00:10:43.126 "adrfam": "IPv4", 00:10:43.126 "traddr": "10.0.0.1", 00:10:43.126 "trsvcid": "60342" 00:10:43.126 }, 00:10:43.126 "auth": { 00:10:43.126 "state": "completed", 00:10:43.126 "digest": "sha512", 00:10:43.126 "dhgroup": "ffdhe6144" 00:10:43.126 } 00:10:43.126 } 00:10:43.126 ]' 00:10:43.126 19:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:43.127 19:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:43.127 19:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:43.384 19:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:43.384 19:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:43.384 19:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:43.384 19:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:43.384 19:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:43.384 19:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjBhNTE4OThjOWM1MjBjNzA0MjE0MDU1NWU5YWE2MmQyOTZmNWYyMTcwOWZlZmUzMTM2YzdhZmY2ZWYwNDY3NMoRYKs=: 00:10:43.384 19:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:03:ZjBhNTE4OThjOWM1MjBjNzA0MjE0MDU1NWU5YWE2MmQyOTZmNWYyMTcwOWZlZmUzMTM2YzdhZmY2ZWYwNDY3NMoRYKs=: 00:10:43.951 19:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:43.951 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:43.951 19:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:10:43.951 19:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.951 19:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.208 19:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.208 19:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:44.208 19:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:44.208 19:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:10:44.208 19:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:10:44.208 19:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:10:44.208 19:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:44.208 19:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:44.208 19:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:44.208 19:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:44.208 19:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:44.208 19:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:44.208 19:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.208 19:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.208 19:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.208 19:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:44.208 19:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:44.208 19:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:44.773 00:10:44.773 19:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:44.773 19:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:44.773 19:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:45.031 19:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:45.031 19:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:45.032 19:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.032 19:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.032 19:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.032 19:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:45.032 { 00:10:45.032 "cntlid": 137, 00:10:45.032 "qid": 0, 00:10:45.032 "state": "enabled", 00:10:45.032 "thread": "nvmf_tgt_poll_group_000", 00:10:45.032 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:10:45.032 "listen_address": { 00:10:45.032 "trtype": "TCP", 00:10:45.032 "adrfam": "IPv4", 00:10:45.032 "traddr": "10.0.0.3", 00:10:45.032 "trsvcid": "4420" 00:10:45.032 }, 00:10:45.032 "peer_address": { 00:10:45.032 "trtype": "TCP", 00:10:45.032 "adrfam": "IPv4", 00:10:45.032 "traddr": "10.0.0.1", 00:10:45.032 "trsvcid": "60360" 00:10:45.032 }, 00:10:45.032 "auth": { 00:10:45.032 "state": "completed", 00:10:45.032 "digest": "sha512", 00:10:45.032 "dhgroup": "ffdhe8192" 00:10:45.032 } 00:10:45.032 } 00:10:45.032 ]' 00:10:45.032 19:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:45.032 19:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:45.032 19:44:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:45.032 19:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:45.032 19:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:45.032 19:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:45.032 19:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:45.032 19:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:45.289 19:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDY1ZGEwOGRhYjUzZjE5NzI1ZjE1YWM1OWNmMDQ4ZTFjMGUzOTRhZGRlNDU1NmUwwJmTAQ==: --dhchap-ctrl-secret DHHC-1:03:YzgzZWVhZmJmNDNjOTRlODNlNmZhNDZkZGNjZTg0NWRkZjc0NWU0OTRiYWM0NzUwZWIzZjA4MTFkMWY2YTIzOTavnEE=: 00:10:45.289 19:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:00:ZDY1ZGEwOGRhYjUzZjE5NzI1ZjE1YWM1OWNmMDQ4ZTFjMGUzOTRhZGRlNDU1NmUwwJmTAQ==: --dhchap-ctrl-secret DHHC-1:03:YzgzZWVhZmJmNDNjOTRlODNlNmZhNDZkZGNjZTg0NWRkZjc0NWU0OTRiYWM0NzUwZWIzZjA4MTFkMWY2YTIzOTavnEE=: 00:10:46.224 19:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:46.224 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:46.224 19:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:10:46.224 19:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.224 19:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.224 19:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.224 19:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:46.224 19:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:10:46.224 19:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:10:46.224 19:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:10:46.224 19:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:46.224 19:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:46.224 19:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:46.224 19:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:46.224 19:44:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:46.224 19:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:46.224 19:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.224 19:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.224 19:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.224 19:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:46.224 19:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:46.224 19:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:46.788 00:10:46.788 19:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:46.788 19:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:46.788 19:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:47.046 19:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:47.046 19:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:47.046 19:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.046 19:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.046 19:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.046 19:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:47.046 { 00:10:47.046 "cntlid": 139, 00:10:47.046 "qid": 0, 00:10:47.046 "state": "enabled", 00:10:47.046 "thread": "nvmf_tgt_poll_group_000", 00:10:47.046 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:10:47.046 "listen_address": { 00:10:47.046 "trtype": "TCP", 00:10:47.046 "adrfam": "IPv4", 00:10:47.046 "traddr": "10.0.0.3", 00:10:47.046 "trsvcid": "4420" 00:10:47.046 }, 00:10:47.046 "peer_address": { 00:10:47.046 "trtype": "TCP", 00:10:47.046 "adrfam": "IPv4", 00:10:47.046 "traddr": "10.0.0.1", 00:10:47.046 "trsvcid": "60408" 00:10:47.046 }, 00:10:47.046 "auth": { 00:10:47.046 "state": "completed", 00:10:47.046 "digest": "sha512", 00:10:47.046 "dhgroup": "ffdhe8192" 00:10:47.046 } 00:10:47.046 } 00:10:47.046 ]' 00:10:47.046 19:44:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:47.046 19:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:47.046 19:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:47.046 19:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:47.046 19:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:47.046 19:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:47.046 19:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:47.046 19:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:47.303 19:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDU4N2QxNTAwOWNiOTA5MzQxZTYxOTA2YzRmYjMxM2Jh56XD: --dhchap-ctrl-secret DHHC-1:02:NmJlYjViMjQzYTBhYjVlMTQ1YWZjMTNhM2M3YmFjMTNlZGY3YTg4YjYwMGY1NjQzMW7RBQ==: 00:10:47.303 19:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:01:NDU4N2QxNTAwOWNiOTA5MzQxZTYxOTA2YzRmYjMxM2Jh56XD: --dhchap-ctrl-secret DHHC-1:02:NmJlYjViMjQzYTBhYjVlMTQ1YWZjMTNhM2M3YmFjMTNlZGY3YTg4YjYwMGY1NjQzMW7RBQ==: 00:10:47.867 19:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:47.867 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:47.867 19:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:10:47.867 19:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.867 19:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.125 19:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.125 19:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:48.125 19:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:10:48.125 19:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:10:48.125 19:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:10:48.125 19:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:48.125 19:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:48.125 19:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:10:48.125 19:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:48.125 19:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:48.126 19:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:48.126 19:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.126 19:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.126 19:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.126 19:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:48.126 19:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:48.126 19:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:48.692 00:10:48.692 19:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:48.692 19:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:48.692 19:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:48.951 19:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:48.951 19:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:48.951 19:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.951 19:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.951 19:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.951 19:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:48.951 { 00:10:48.951 "cntlid": 141, 00:10:48.951 "qid": 0, 00:10:48.951 "state": "enabled", 00:10:48.951 "thread": "nvmf_tgt_poll_group_000", 00:10:48.951 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:10:48.951 "listen_address": { 00:10:48.951 "trtype": "TCP", 00:10:48.951 "adrfam": "IPv4", 00:10:48.951 "traddr": "10.0.0.3", 00:10:48.951 "trsvcid": "4420" 00:10:48.951 }, 00:10:48.951 "peer_address": { 00:10:48.951 "trtype": "TCP", 00:10:48.951 "adrfam": "IPv4", 00:10:48.951 "traddr": "10.0.0.1", 00:10:48.951 "trsvcid": "60438" 00:10:48.951 }, 00:10:48.951 "auth": { 00:10:48.951 "state": "completed", 00:10:48.951 "digest": 
"sha512", 00:10:48.951 "dhgroup": "ffdhe8192" 00:10:48.951 } 00:10:48.951 } 00:10:48.951 ]' 00:10:48.951 19:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:48.951 19:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:48.951 19:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:49.209 19:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:49.209 19:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:49.209 19:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:49.209 19:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:49.209 19:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:49.209 19:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTViNGY5ZmRiYTg0YmNhNzdmNjY1Y2YxMzdlYjI5MWIzOTZlOGRmYWUxZjdhN2NjEiwkGw==: --dhchap-ctrl-secret DHHC-1:01:OGQ4OGMxZjUwODc2NWI5OTI5OGE4NDgxYmY5NWEwYjk6tdyP: 00:10:49.209 19:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:02:MTViNGY5ZmRiYTg0YmNhNzdmNjY1Y2YxMzdlYjI5MWIzOTZlOGRmYWUxZjdhN2NjEiwkGw==: --dhchap-ctrl-secret DHHC-1:01:OGQ4OGMxZjUwODc2NWI5OTI5OGE4NDgxYmY5NWEwYjk6tdyP: 00:10:49.774 19:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:49.774 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:49.774 19:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:10:49.774 19:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.774 19:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.032 19:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.032 19:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:50.032 19:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:10:50.032 19:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:10:50.032 19:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:10:50.032 19:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:50.032 19:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:10:50.032 19:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:50.032 19:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:50.032 19:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:50.032 19:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key3 00:10:50.032 19:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.032 19:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.032 19:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.032 19:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:50.032 19:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:50.032 19:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:50.598 00:10:50.598 19:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:50.598 19:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:50.598 19:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:50.856 19:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:50.856 19:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:50.856 19:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.856 19:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.856 19:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.856 19:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:50.856 { 00:10:50.856 "cntlid": 143, 00:10:50.856 "qid": 0, 00:10:50.856 "state": "enabled", 00:10:50.856 "thread": "nvmf_tgt_poll_group_000", 00:10:50.856 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:10:50.856 "listen_address": { 00:10:50.856 "trtype": "TCP", 00:10:50.856 "adrfam": "IPv4", 00:10:50.856 "traddr": "10.0.0.3", 00:10:50.856 "trsvcid": "4420" 00:10:50.856 }, 00:10:50.856 "peer_address": { 00:10:50.856 "trtype": "TCP", 00:10:50.856 "adrfam": "IPv4", 00:10:50.856 "traddr": "10.0.0.1", 00:10:50.856 "trsvcid": "60462" 00:10:50.856 }, 00:10:50.856 "auth": { 00:10:50.856 "state": "completed", 00:10:50.856 
"digest": "sha512", 00:10:50.856 "dhgroup": "ffdhe8192" 00:10:50.856 } 00:10:50.856 } 00:10:50.856 ]' 00:10:50.856 19:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:50.856 19:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:50.856 19:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:50.856 19:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:50.856 19:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:50.856 19:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:50.856 19:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:50.856 19:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:51.115 19:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjBhNTE4OThjOWM1MjBjNzA0MjE0MDU1NWU5YWE2MmQyOTZmNWYyMTcwOWZlZmUzMTM2YzdhZmY2ZWYwNDY3NMoRYKs=: 00:10:51.115 19:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:03:ZjBhNTE4OThjOWM1MjBjNzA0MjE0MDU1NWU5YWE2MmQyOTZmNWYyMTcwOWZlZmUzMTM2YzdhZmY2ZWYwNDY3NMoRYKs=: 00:10:52.051 19:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:52.051 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:52.051 19:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:10:52.051 19:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.051 19:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.051 19:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.051 19:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:10:52.051 19:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:10:52.051 19:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:10:52.051 19:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:10:52.051 19:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:10:52.051 19:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:10:52.051 19:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:10:52.051 19:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:52.051 19:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:52.051 19:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:52.051 19:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:52.051 19:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:52.051 19:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:52.051 19:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.051 19:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.051 19:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.051 19:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:52.051 19:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:52.051 19:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:52.617 00:10:52.617 19:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:52.617 19:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:52.617 19:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:52.874 19:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:52.874 19:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:52.874 19:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.874 19:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.874 19:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.874 19:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:52.874 { 00:10:52.874 "cntlid": 145, 00:10:52.874 "qid": 0, 00:10:52.874 "state": "enabled", 00:10:52.874 "thread": "nvmf_tgt_poll_group_000", 00:10:52.874 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:10:52.874 "listen_address": { 00:10:52.874 "trtype": "TCP", 00:10:52.874 "adrfam": "IPv4", 00:10:52.874 "traddr": "10.0.0.3", 00:10:52.874 "trsvcid": "4420" 00:10:52.874 }, 00:10:52.874 "peer_address": { 00:10:52.874 "trtype": "TCP", 00:10:52.874 "adrfam": "IPv4", 00:10:52.875 "traddr": "10.0.0.1", 00:10:52.875 "trsvcid": "55388" 00:10:52.875 }, 00:10:52.875 "auth": { 00:10:52.875 "state": "completed", 00:10:52.875 "digest": "sha512", 00:10:52.875 "dhgroup": "ffdhe8192" 00:10:52.875 } 00:10:52.875 } 00:10:52.875 ]' 00:10:52.875 19:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:52.875 19:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:52.875 19:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:52.875 19:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:52.875 19:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:52.875 19:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:52.875 19:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:52.875 19:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:53.216 19:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDY1ZGEwOGRhYjUzZjE5NzI1ZjE1YWM1OWNmMDQ4ZTFjMGUzOTRhZGRlNDU1NmUwwJmTAQ==: --dhchap-ctrl-secret DHHC-1:03:YzgzZWVhZmJmNDNjOTRlODNlNmZhNDZkZGNjZTg0NWRkZjc0NWU0OTRiYWM0NzUwZWIzZjA4MTFkMWY2YTIzOTavnEE=: 00:10:53.216 19:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:00:ZDY1ZGEwOGRhYjUzZjE5NzI1ZjE1YWM1OWNmMDQ4ZTFjMGUzOTRhZGRlNDU1NmUwwJmTAQ==: --dhchap-ctrl-secret DHHC-1:03:YzgzZWVhZmJmNDNjOTRlODNlNmZhNDZkZGNjZTg0NWRkZjc0NWU0OTRiYWM0NzUwZWIzZjA4MTFkMWY2YTIzOTavnEE=: 00:10:53.782 19:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:53.782 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:53.782 19:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:10:53.782 19:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.782 19:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.782 19:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.782 19:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key1 00:10:53.782 19:44:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.782 19:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.782 19:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.782 19:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:10:53.782 19:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:10:53.782 19:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:10:53.782 19:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:10:53.782 19:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:53.782 19:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:10:53.782 19:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:53.782 19:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:10:53.782 19:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:10:53.782 19:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:10:54.346 request: 00:10:54.346 { 00:10:54.346 "name": "nvme0", 00:10:54.346 "trtype": "tcp", 00:10:54.346 "traddr": "10.0.0.3", 00:10:54.346 "adrfam": "ipv4", 00:10:54.346 "trsvcid": "4420", 00:10:54.346 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:10:54.346 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:10:54.346 "prchk_reftag": false, 00:10:54.346 "prchk_guard": false, 00:10:54.346 "hdgst": false, 00:10:54.346 "ddgst": false, 00:10:54.346 "dhchap_key": "key2", 00:10:54.346 "allow_unrecognized_csi": false, 00:10:54.346 "method": "bdev_nvme_attach_controller", 00:10:54.346 "req_id": 1 00:10:54.346 } 00:10:54.346 Got JSON-RPC error response 00:10:54.346 response: 00:10:54.346 { 00:10:54.346 "code": -5, 00:10:54.346 "message": "Input/output error" 00:10:54.346 } 00:10:54.346 19:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:10:54.346 19:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:54.346 19:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:54.346 19:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:54.346 19:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:10:54.346 
19:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.346 19:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.346 19:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.346 19:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:54.346 19:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.346 19:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.346 19:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.346 19:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:10:54.346 19:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:10:54.346 19:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:10:54.346 19:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:10:54.346 19:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:54.346 19:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:10:54.346 19:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:54.346 19:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:10:54.346 19:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:10:54.346 19:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:10:54.912 request: 00:10:54.912 { 00:10:54.912 "name": "nvme0", 00:10:54.912 "trtype": "tcp", 00:10:54.912 "traddr": "10.0.0.3", 00:10:54.912 "adrfam": "ipv4", 00:10:54.912 "trsvcid": "4420", 00:10:54.912 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:10:54.912 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:10:54.912 "prchk_reftag": false, 00:10:54.912 "prchk_guard": false, 00:10:54.912 "hdgst": false, 00:10:54.912 "ddgst": false, 00:10:54.912 "dhchap_key": "key1", 00:10:54.912 "dhchap_ctrlr_key": "ckey2", 00:10:54.912 "allow_unrecognized_csi": false, 00:10:54.912 "method": "bdev_nvme_attach_controller", 00:10:54.912 "req_id": 1 00:10:54.912 } 00:10:54.912 Got JSON-RPC error response 00:10:54.912 response: 00:10:54.912 { 
00:10:54.912 "code": -5, 00:10:54.912 "message": "Input/output error" 00:10:54.912 } 00:10:54.912 19:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:10:54.912 19:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:54.912 19:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:54.912 19:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:54.912 19:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:10:54.912 19:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.912 19:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.912 19:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.912 19:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key1 00:10:54.912 19:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.912 19:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.912 19:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.912 19:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:54.912 19:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:10:54.912 19:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:54.912 19:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:10:54.912 19:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:54.912 19:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:10:54.912 19:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:54.912 19:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:54.912 19:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:54.912 19:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:55.170 
request: 00:10:55.170 { 00:10:55.170 "name": "nvme0", 00:10:55.170 "trtype": "tcp", 00:10:55.170 "traddr": "10.0.0.3", 00:10:55.170 "adrfam": "ipv4", 00:10:55.170 "trsvcid": "4420", 00:10:55.170 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:10:55.170 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:10:55.170 "prchk_reftag": false, 00:10:55.170 "prchk_guard": false, 00:10:55.170 "hdgst": false, 00:10:55.170 "ddgst": false, 00:10:55.170 "dhchap_key": "key1", 00:10:55.170 "dhchap_ctrlr_key": "ckey1", 00:10:55.170 "allow_unrecognized_csi": false, 00:10:55.170 "method": "bdev_nvme_attach_controller", 00:10:55.170 "req_id": 1 00:10:55.170 } 00:10:55.170 Got JSON-RPC error response 00:10:55.170 response: 00:10:55.170 { 00:10:55.170 "code": -5, 00:10:55.170 "message": "Input/output error" 00:10:55.170 } 00:10:55.170 19:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:10:55.170 19:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:55.170 19:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:55.170 19:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:55.170 19:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:10:55.170 19:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.170 19:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.170 19:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.428 19:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 66166 00:10:55.428 19:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 66166 ']' 00:10:55.428 19:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 66166 00:10:55.428 19:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:10:55.428 19:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:55.428 19:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66166 00:10:55.428 19:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:55.428 killing process with pid 66166 00:10:55.428 19:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:55.428 19:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66166' 00:10:55.428 19:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 66166 00:10:55.428 19:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 66166 00:10:55.428 19:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:10:55.428 19:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:55.428 19:44:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:55.428 19:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.428 19:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=68916 00:10:55.428 19:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 68916 00:10:55.428 19:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 68916 ']' 00:10:55.428 19:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.428 19:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:55.428 19:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:55.428 19:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:55.428 19:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:10:55.428 19:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.360 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:56.360 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:10:56.360 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:56.360 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:56.360 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.360 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:56.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:56.360 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:10:56.360 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 68916 00:10:56.360 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 68916 ']' 00:10:56.360 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:56.360 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:56.360 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
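At this point the first target process (pid 66166) has been shut down and a second nvmf_tgt started with --wait-for-rpc and the nvmf_auth debug log flag, so that the keyring-based DHCHAP flows in the entries that follow can be traced. A rough, illustrative reconstruction of that restart in plain shell, using only the binary path, netns name and flags visible in the captured command line (the polling loop merely stands in for the harness's waitforlisten helper and is an assumption, not the harness code):

    # Start a fresh NVMe-oF target inside the test netns, with DHCHAP auth logging enabled.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!
    # Stand-in for waitforlisten: poll the RPC socket until the target answers.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done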
00:10:56.360 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:56.360 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.618 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:56.618 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:10:56.618 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:10:56.618 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.618 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.618 null0 00:10:56.618 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.618 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:10:56.618 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.rUM 00:10:56.618 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.618 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.618 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.618 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.wcb ]] 00:10:56.618 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.wcb 00:10:56.618 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.618 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.618 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.618 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:10:56.618 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.R6Q 00:10:56.618 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.618 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.618 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.618 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.c2k ]] 00:10:56.618 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.c2k 00:10:56.618 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.618 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.618 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.618 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:10:56.618 19:44:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.6Q3 00:10:56.618 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.618 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.618 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.618 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.buG ]] 00:10:56.618 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.buG 00:10:56.618 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.618 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.618 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.619 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:10:56.619 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.cVG 00:10:56.619 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.619 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.619 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.619 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:10:56.619 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:10:56.619 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:56.619 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:10:56.619 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:56.619 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:56.619 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:56.619 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key3 00:10:56.619 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.619 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.619 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.619 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:56.619 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
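The sequence just above is the heart of the connect_authenticate helper for key3: the DHCHAP secrets are registered as named keyring entries on the target, the host NQN is authorized on the subsystem with one of those keys, and the host-side bdev layer then attaches a controller using the same key. A condensed sketch of that flow as standalone RPC calls follows; the key file names, NQNs and addresses are the ones appearing in this log, and it assumes the subsystem, its TCP listener on 10.0.0.3:4420 and the host RPC server on /var/tmp/host.sock already exist:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Target side: load the secret from a file into the keyring and authorize the host with it.
    $rpc keyring_file_add_key key3 /tmp/spdk.key-sha512.cVG
    $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key3
    # Host side: attach a controller, authenticating with the matching key
    # (the qpair dump just below reports digest sha512, dhgroup ffdhe8192, state completed).
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
    # A missing or mismatched key makes the same attach fail with JSON-RPC error -5
    # (Input/output error), which is the negative case exercised elsewhere in this log.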
00:10:56.619 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:57.599 nvme0n1 00:10:57.599 19:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:57.599 19:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:57.599 19:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:57.858 19:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:57.858 19:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:57.858 19:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.858 19:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.858 19:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.858 19:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:57.858 { 00:10:57.858 "cntlid": 1, 00:10:57.858 "qid": 0, 00:10:57.858 "state": "enabled", 00:10:57.858 "thread": "nvmf_tgt_poll_group_000", 00:10:57.858 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:10:57.858 "listen_address": { 00:10:57.858 "trtype": "TCP", 00:10:57.858 "adrfam": "IPv4", 00:10:57.858 "traddr": "10.0.0.3", 00:10:57.858 "trsvcid": "4420" 00:10:57.858 }, 00:10:57.858 "peer_address": { 00:10:57.858 "trtype": "TCP", 00:10:57.858 "adrfam": "IPv4", 00:10:57.858 "traddr": "10.0.0.1", 00:10:57.858 "trsvcid": "55430" 00:10:57.858 }, 00:10:57.858 "auth": { 00:10:57.858 "state": "completed", 00:10:57.858 "digest": "sha512", 00:10:57.858 "dhgroup": "ffdhe8192" 00:10:57.858 } 00:10:57.858 } 00:10:57.858 ]' 00:10:57.858 19:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:57.858 19:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:10:57.858 19:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:57.858 19:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:57.858 19:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:57.859 19:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:57.859 19:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:57.859 19:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:58.116 19:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZjBhNTE4OThjOWM1MjBjNzA0MjE0MDU1NWU5YWE2MmQyOTZmNWYyMTcwOWZlZmUzMTM2YzdhZmY2ZWYwNDY3NMoRYKs=: 00:10:58.116 19:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:03:ZjBhNTE4OThjOWM1MjBjNzA0MjE0MDU1NWU5YWE2MmQyOTZmNWYyMTcwOWZlZmUzMTM2YzdhZmY2ZWYwNDY3NMoRYKs=: 00:10:58.682 19:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:58.682 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:58.682 19:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:10:58.682 19:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.682 19:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.682 19:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.682 19:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key3 00:10:58.682 19:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.682 19:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.682 19:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.682 19:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:10:58.682 19:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:10:58.940 19:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:10:58.940 19:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:10:58.940 19:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:10:58.940 19:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:10:58.940 19:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:58.941 19:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:10:58.941 19:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:58.941 19:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:58.941 19:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:58.941 19:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:59.198 request: 00:10:59.198 { 00:10:59.198 "name": "nvme0", 00:10:59.198 "trtype": "tcp", 00:10:59.198 "traddr": "10.0.0.3", 00:10:59.198 "adrfam": "ipv4", 00:10:59.198 "trsvcid": "4420", 00:10:59.198 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:10:59.198 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:10:59.198 "prchk_reftag": false, 00:10:59.198 "prchk_guard": false, 00:10:59.198 "hdgst": false, 00:10:59.198 "ddgst": false, 00:10:59.198 "dhchap_key": "key3", 00:10:59.198 "allow_unrecognized_csi": false, 00:10:59.198 "method": "bdev_nvme_attach_controller", 00:10:59.198 "req_id": 1 00:10:59.198 } 00:10:59.198 Got JSON-RPC error response 00:10:59.198 response: 00:10:59.198 { 00:10:59.198 "code": -5, 00:10:59.198 "message": "Input/output error" 00:10:59.198 } 00:10:59.198 19:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:10:59.198 19:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:59.198 19:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:59.198 19:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:59.198 19:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:10:59.198 19:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:10:59.198 19:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:10:59.198 19:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:10:59.456 19:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:10:59.456 19:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:10:59.456 19:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:10:59.456 19:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:10:59.456 19:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:59.456 19:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:10:59.456 19:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:59.456 19:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:59.456 19:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:59.456 19:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:59.714 request: 00:10:59.714 { 00:10:59.714 "name": "nvme0", 00:10:59.714 "trtype": "tcp", 00:10:59.714 "traddr": "10.0.0.3", 00:10:59.714 "adrfam": "ipv4", 00:10:59.714 "trsvcid": "4420", 00:10:59.714 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:10:59.714 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:10:59.714 "prchk_reftag": false, 00:10:59.714 "prchk_guard": false, 00:10:59.714 "hdgst": false, 00:10:59.714 "ddgst": false, 00:10:59.714 "dhchap_key": "key3", 00:10:59.714 "allow_unrecognized_csi": false, 00:10:59.714 "method": "bdev_nvme_attach_controller", 00:10:59.714 "req_id": 1 00:10:59.714 } 00:10:59.714 Got JSON-RPC error response 00:10:59.714 response: 00:10:59.714 { 00:10:59.714 "code": -5, 00:10:59.714 "message": "Input/output error" 00:10:59.714 } 00:10:59.714 19:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:10:59.714 19:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:59.714 19:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:59.714 19:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:59.714 19:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:10:59.714 19:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:10:59.714 19:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:10:59.714 19:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:10:59.714 19:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:10:59.714 19:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:10:59.714 19:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:10:59.714 19:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.714 19:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.714 19:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.714 19:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:10:59.714 19:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.714 19:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.714 19:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.714 19:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:10:59.714 19:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:10:59.714 19:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:10:59.714 19:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:10:59.973 19:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:59.973 19:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:10:59.973 19:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:59.973 19:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:10:59.973 19:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:10:59.973 19:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:11:00.231 request: 00:11:00.231 { 00:11:00.231 "name": "nvme0", 00:11:00.231 "trtype": "tcp", 00:11:00.231 "traddr": "10.0.0.3", 00:11:00.231 "adrfam": "ipv4", 00:11:00.231 "trsvcid": "4420", 00:11:00.231 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:11:00.231 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:11:00.231 "prchk_reftag": false, 00:11:00.231 "prchk_guard": false, 00:11:00.231 "hdgst": false, 00:11:00.231 "ddgst": false, 00:11:00.231 "dhchap_key": "key0", 00:11:00.231 "dhchap_ctrlr_key": "key1", 00:11:00.231 "allow_unrecognized_csi": false, 00:11:00.231 "method": "bdev_nvme_attach_controller", 00:11:00.231 "req_id": 1 00:11:00.231 } 00:11:00.231 Got JSON-RPC error response 00:11:00.231 response: 00:11:00.231 { 00:11:00.231 "code": -5, 00:11:00.231 "message": "Input/output error" 00:11:00.231 } 00:11:00.231 19:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:11:00.231 19:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:00.231 19:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:00.232 19:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:11:00.232 19:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:11:00.232 19:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:11:00.232 19:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:11:00.489 nvme0n1 00:11:00.489 19:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:11:00.489 19:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:11:00.489 19:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:00.749 19:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:00.749 19:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:00.749 19:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:01.015 19:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key1 00:11:01.015 19:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.015 19:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.015 19:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.015 19:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:11:01.015 19:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:11:01.015 19:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:11:01.948 nvme0n1 00:11:01.948 19:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:11:01.948 19:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:11:01.948 19:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:01.948 19:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:01.948 19:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key2 --dhchap-ctrlr-key key3 00:11:01.948 19:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.948 19:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.948 19:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.948 19:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:11:01.948 19:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:01.948 19:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:11:02.205 19:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:02.205 19:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:MTViNGY5ZmRiYTg0YmNhNzdmNjY1Y2YxMzdlYjI5MWIzOTZlOGRmYWUxZjdhN2NjEiwkGw==: --dhchap-ctrl-secret DHHC-1:03:ZjBhNTE4OThjOWM1MjBjNzA0MjE0MDU1NWU5YWE2MmQyOTZmNWYyMTcwOWZlZmUzMTM2YzdhZmY2ZWYwNDY3NMoRYKs=: 00:11:02.205 19:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid 91838eb1-5852-43eb-90b2-09876f360ab2 -l 0 --dhchap-secret DHHC-1:02:MTViNGY5ZmRiYTg0YmNhNzdmNjY1Y2YxMzdlYjI5MWIzOTZlOGRmYWUxZjdhN2NjEiwkGw==: --dhchap-ctrl-secret DHHC-1:03:ZjBhNTE4OThjOWM1MjBjNzA0MjE0MDU1NWU5YWE2MmQyOTZmNWYyMTcwOWZlZmUzMTM2YzdhZmY2ZWYwNDY3NMoRYKs=: 00:11:02.771 19:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:11:02.771 19:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:11:02.771 19:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:11:02.771 19:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:11:02.771 19:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:11:02.771 19:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:11:02.771 19:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:11:02.771 19:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:02.771 19:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:03.029 19:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:11:03.029 19:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:11:03.029 19:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:11:03.029 19:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:11:03.029 19:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:03.029 19:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:11:03.029 19:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:03.029 19:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:11:03.029 19:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:11:03.029 19:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:11:03.594 request: 00:11:03.594 { 00:11:03.594 "name": "nvme0", 00:11:03.594 "trtype": "tcp", 00:11:03.594 "traddr": "10.0.0.3", 00:11:03.594 "adrfam": "ipv4", 00:11:03.594 "trsvcid": "4420", 00:11:03.594 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:11:03.594 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2", 00:11:03.594 "prchk_reftag": false, 00:11:03.594 "prchk_guard": false, 00:11:03.594 "hdgst": false, 00:11:03.594 "ddgst": false, 00:11:03.594 "dhchap_key": "key1", 00:11:03.594 "allow_unrecognized_csi": false, 00:11:03.594 "method": "bdev_nvme_attach_controller", 00:11:03.594 "req_id": 1 00:11:03.594 } 00:11:03.594 Got JSON-RPC error response 00:11:03.594 response: 00:11:03.594 { 00:11:03.594 "code": -5, 00:11:03.594 "message": "Input/output error" 00:11:03.594 } 00:11:03.594 19:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:11:03.594 19:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:03.594 19:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:03.594 19:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:03.594 19:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:11:03.594 19:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:11:03.594 19:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:11:04.526 nvme0n1 00:11:04.526 
19:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:11:04.526 19:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:11:04.526 19:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:04.788 19:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:04.788 19:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:04.788 19:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:04.788 19:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:11:04.788 19:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.788 19:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.788 19:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.788 19:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:11:04.788 19:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:11:04.788 19:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:11:05.046 nvme0n1 00:11:05.046 19:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:11:05.046 19:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:11:05.046 19:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:05.303 19:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:05.303 19:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:05.303 19:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:05.561 19:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key1 --dhchap-ctrlr-key key3 00:11:05.561 19:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.561 19:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.561 19:45:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.561 19:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NDU4N2QxNTAwOWNiOTA5MzQxZTYxOTA2YzRmYjMxM2Jh56XD: '' 2s 00:11:05.561 19:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:11:05.561 19:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:11:05.561 19:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NDU4N2QxNTAwOWNiOTA5MzQxZTYxOTA2YzRmYjMxM2Jh56XD: 00:11:05.561 19:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:11:05.561 19:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:11:05.561 19:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:11:05.561 19:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NDU4N2QxNTAwOWNiOTA5MzQxZTYxOTA2YzRmYjMxM2Jh56XD: ]] 00:11:05.561 19:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NDU4N2QxNTAwOWNiOTA5MzQxZTYxOTA2YzRmYjMxM2Jh56XD: 00:11:05.561 19:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:11:05.561 19:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:11:05.561 19:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:11:08.090 19:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:11:08.090 19:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:11:08.090 19:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:11:08.090 19:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:11:08.090 19:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:11:08.090 19:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:11:08.090 19:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:11:08.090 19:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key1 --dhchap-ctrlr-key key2 00:11:08.090 19:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.090 19:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.090 19:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.090 19:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:MTViNGY5ZmRiYTg0YmNhNzdmNjY1Y2YxMzdlYjI5MWIzOTZlOGRmYWUxZjdhN2NjEiwkGw==: 2s 00:11:08.090 19:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:11:08.090 19:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:11:08.090 19:45:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:11:08.090 19:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:MTViNGY5ZmRiYTg0YmNhNzdmNjY1Y2YxMzdlYjI5MWIzOTZlOGRmYWUxZjdhN2NjEiwkGw==: 00:11:08.090 19:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:11:08.090 19:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:11:08.090 19:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:11:08.090 19:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:MTViNGY5ZmRiYTg0YmNhNzdmNjY1Y2YxMzdlYjI5MWIzOTZlOGRmYWUxZjdhN2NjEiwkGw==: ]] 00:11:08.090 19:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:MTViNGY5ZmRiYTg0YmNhNzdmNjY1Y2YxMzdlYjI5MWIzOTZlOGRmYWUxZjdhN2NjEiwkGw==: 00:11:08.090 19:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:11:08.090 19:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:11:09.991 19:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:11:09.991 19:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:11:09.991 19:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:11:09.991 19:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:11:09.991 19:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:11:09.991 19:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:11:09.991 19:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:11:09.991 19:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:09.991 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:09.991 19:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key0 --dhchap-ctrlr-key key1 00:11:09.991 19:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.991 19:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.991 19:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.991 19:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:11:09.991 19:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:11:09.991 19:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:11:10.560 nvme0n1 00:11:10.560 19:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key2 --dhchap-ctrlr-key key3 00:11:10.560 19:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.560 19:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.560 19:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.560 19:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:11:10.560 19:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:11:11.129 19:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:11:11.129 19:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:11.129 19:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:11:11.390 19:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:11.390 19:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:11:11.390 19:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.390 19:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.390 19:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.390 19:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:11:11.390 19:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:11:11.390 19:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:11:11.390 19:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:11:11.390 19:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:11.648 19:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:11.648 19:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key2 --dhchap-ctrlr-key key3 00:11:11.648 19:45:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.648 19:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.648 19:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.648 19:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:11:11.648 19:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:11:11.648 19:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:11:11.648 19:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:11:11.648 19:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:11.648 19:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:11:11.648 19:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:11.648 19:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:11:11.648 19:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:11:12.213 request: 00:11:12.213 { 00:11:12.213 "name": "nvme0", 00:11:12.213 "dhchap_key": "key1", 00:11:12.213 "dhchap_ctrlr_key": "key3", 00:11:12.213 "method": "bdev_nvme_set_keys", 00:11:12.213 "req_id": 1 00:11:12.213 } 00:11:12.213 Got JSON-RPC error response 00:11:12.213 response: 00:11:12.213 { 00:11:12.213 "code": -13, 00:11:12.213 "message": "Permission denied" 00:11:12.213 } 00:11:12.213 19:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:11:12.213 19:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:12.213 19:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:12.213 19:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:12.213 19:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:11:12.213 19:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:11:12.213 19:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:12.470 19:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:11:12.470 19:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:11:13.486 19:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:11:13.486 19:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:11:13.486 19:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:13.743 19:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:11:13.743 19:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key0 --dhchap-ctrlr-key key1 00:11:13.743 19:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.743 19:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.743 19:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.743 19:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:11:13.743 19:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:11:13.743 19:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:11:14.676 nvme0n1 00:11:14.676 19:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --dhchap-key key2 --dhchap-ctrlr-key key3 00:11:14.676 19:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.676 19:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.676 19:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.676 19:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:11:14.676 19:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:11:14.676 19:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:11:14.676 19:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:11:14.676 19:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:14.676 19:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:11:14.676 19:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:14.676 19:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:11:14.676 19:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:11:14.933 request: 00:11:14.933 { 00:11:14.933 "name": "nvme0", 00:11:14.933 "dhchap_key": "key2", 00:11:14.933 "dhchap_ctrlr_key": "key0", 00:11:14.933 "method": "bdev_nvme_set_keys", 00:11:14.933 "req_id": 1 00:11:14.933 } 00:11:14.933 Got JSON-RPC error response 00:11:14.933 response: 00:11:14.933 { 00:11:14.933 "code": -13, 00:11:14.933 "message": "Permission denied" 00:11:14.933 } 00:11:14.934 19:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:11:14.934 19:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:14.934 19:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:14.934 19:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:14.934 19:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:11:14.934 19:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:14.934 19:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:11:15.190 19:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:11:15.190 19:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:11:16.564 19:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:11:16.564 19:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:11:16.564 19:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:16.564 19:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:11:16.564 19:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:11:16.564 19:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:11:16.564 19:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 66198 00:11:16.564 19:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 66198 ']' 00:11:16.564 19:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 66198 00:11:16.564 19:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:11:16.564 19:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:16.564 19:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66198 00:11:16.564 killing process with pid 66198 00:11:16.565 19:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:16.565 19:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:16.565 19:45:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66198' 00:11:16.565 19:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 66198 00:11:16.565 19:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 66198 00:11:16.565 19:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:11:16.565 19:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:16.565 19:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:11:16.823 19:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:16.823 19:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:11:16.823 19:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:16.823 19:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:16.823 rmmod nvme_tcp 00:11:16.823 rmmod nvme_fabrics 00:11:16.823 rmmod nvme_keyring 00:11:16.823 19:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:16.823 19:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:11:16.823 19:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:11:16.823 19:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 68916 ']' 00:11:16.823 19:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 68916 00:11:16.823 19:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 68916 ']' 00:11:16.823 19:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 68916 00:11:16.823 19:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:11:16.823 19:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:16.823 19:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68916 00:11:16.823 killing process with pid 68916 00:11:16.823 19:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:16.823 19:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:16.823 19:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68916' 00:11:16.823 19:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 68916 00:11:16.823 19:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 68916 00:11:16.823 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:16.823 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:16.823 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:16.823 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:11:16.823 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 
00:11:16.823 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:11:16.823 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:16.823 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:16.823 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:16.823 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:16.823 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:16.823 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:16.823 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:17.082 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:17.082 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:17.082 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:17.082 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:17.082 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:17.082 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:17.082 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:17.082 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:17.082 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:17.082 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:17.082 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:17.082 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:17.082 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:17.082 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:11:17.082 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.rUM /tmp/spdk.key-sha256.R6Q /tmp/spdk.key-sha384.6Q3 /tmp/spdk.key-sha512.cVG /tmp/spdk.key-sha512.wcb /tmp/spdk.key-sha384.c2k /tmp/spdk.key-sha256.buG '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:11:17.082 00:11:17.082 real 2m33.639s 00:11:17.082 user 6m1.928s 00:11:17.082 sys 0m20.107s 00:11:17.082 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:17.082 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.082 ************************************ 00:11:17.082 END TEST nvmf_auth_target 
00:11:17.082 ************************************ 00:11:17.082 19:45:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:11:17.082 19:45:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:11:17.082 19:45:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:17.082 19:45:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:17.082 19:45:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:17.082 ************************************ 00:11:17.082 START TEST nvmf_bdevio_no_huge 00:11:17.082 ************************************ 00:11:17.082 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:11:17.341 * Looking for test storage... 00:11:17.341 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:17.341 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:17.341 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:11:17.341 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:17.341 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:17.341 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:17.341 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:17.341 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:17.341 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:17.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.342 --rc genhtml_branch_coverage=1 00:11:17.342 --rc genhtml_function_coverage=1 00:11:17.342 --rc genhtml_legend=1 00:11:17.342 --rc geninfo_all_blocks=1 00:11:17.342 --rc geninfo_unexecuted_blocks=1 00:11:17.342 00:11:17.342 ' 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:17.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.342 --rc genhtml_branch_coverage=1 00:11:17.342 --rc genhtml_function_coverage=1 00:11:17.342 --rc genhtml_legend=1 00:11:17.342 --rc geninfo_all_blocks=1 00:11:17.342 --rc geninfo_unexecuted_blocks=1 00:11:17.342 00:11:17.342 ' 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:17.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.342 --rc genhtml_branch_coverage=1 00:11:17.342 --rc genhtml_function_coverage=1 00:11:17.342 --rc genhtml_legend=1 00:11:17.342 --rc geninfo_all_blocks=1 00:11:17.342 --rc geninfo_unexecuted_blocks=1 00:11:17.342 00:11:17.342 ' 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:17.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.342 --rc genhtml_branch_coverage=1 00:11:17.342 --rc genhtml_function_coverage=1 00:11:17.342 --rc genhtml_legend=1 00:11:17.342 --rc geninfo_all_blocks=1 00:11:17.342 --rc geninfo_unexecuted_blocks=1 00:11:17.342 00:11:17.342 ' 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:17.342 
19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=91838eb1-5852-43eb-90b2-09876f360ab2 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:17.342 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:17.342 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:17.343 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:17.343 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:17.343 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:17.343 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:17.343 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:17.343 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:17.343 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:17.343 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:17.343 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:17.343 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:17.343 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:17.343 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:17.343 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:17.343 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:17.343 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:17.343 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:17.343 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:17.343 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:17.343 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:17.343 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:17.343 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:17.343 
19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:17.343 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:17.343 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:17.343 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:17.343 Cannot find device "nvmf_init_br" 00:11:17.343 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:11:17.343 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:17.343 Cannot find device "nvmf_init_br2" 00:11:17.343 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:11:17.343 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:17.343 Cannot find device "nvmf_tgt_br" 00:11:17.343 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:11:17.343 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:17.343 Cannot find device "nvmf_tgt_br2" 00:11:17.343 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:11:17.343 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:17.343 Cannot find device "nvmf_init_br" 00:11:17.343 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:11:17.343 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:17.343 Cannot find device "nvmf_init_br2" 00:11:17.343 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:11:17.343 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:17.343 Cannot find device "nvmf_tgt_br" 00:11:17.343 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:11:17.343 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:17.343 Cannot find device "nvmf_tgt_br2" 00:11:17.343 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:11:17.343 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:17.343 Cannot find device "nvmf_br" 00:11:17.343 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:11:17.343 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:17.343 Cannot find device "nvmf_init_if" 00:11:17.343 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:11:17.343 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:17.343 Cannot find device "nvmf_init_if2" 00:11:17.343 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:11:17.343 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:11:17.343 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:17.343 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:11:17.343 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:17.343 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:17.343 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:11:17.343 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:17.343 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:17.343 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:17.343 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:17.343 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:17.343 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:17.343 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:17.343 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:17.343 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:17.343 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:17.601 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:17.601 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:17.601 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:17.601 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:17.601 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:17.601 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:17.601 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:17.601 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:17.601 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:17.601 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:17.601 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:17.601 19:45:12 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:17.601 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:17.601 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:17.601 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:17.601 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:17.601 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:17.601 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:17.601 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:17.601 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:17.601 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:17.601 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:17.601 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:17.601 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:17.601 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:11:17.601 00:11:17.601 --- 10.0.0.3 ping statistics --- 00:11:17.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:17.601 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:11:17.601 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:17.601 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:17.601 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.032 ms 00:11:17.601 00:11:17.601 --- 10.0.0.4 ping statistics --- 00:11:17.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:17.601 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:11:17.601 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:17.601 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:17.601 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:11:17.601 00:11:17.601 --- 10.0.0.1 ping statistics --- 00:11:17.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:17.601 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:11:17.601 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:17.601 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:17.601 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:11:17.601 00:11:17.601 --- 10.0.0.2 ping statistics --- 00:11:17.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:17.601 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:11:17.601 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:17.601 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 00:11:17.601 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:17.601 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:17.601 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:17.601 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:17.601 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:17.601 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:17.601 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:17.601 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:17.601 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:17.602 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:17.602 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:11:17.602 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=69528 00:11:17.602 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 69528 00:11:17.602 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:11:17.602 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 69528 ']' 00:11:17.602 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:17.602 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:17.602 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:17.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:17.602 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:17.602 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:11:17.602 [2024-11-26 19:45:12.743450] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:11:17.602 [2024-11-26 19:45:12.743515] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:11:17.859 [2024-11-26 19:45:12.892902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:17.859 [2024-11-26 19:45:12.942713] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:17.859 [2024-11-26 19:45:12.942750] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:17.859 [2024-11-26 19:45:12.942756] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:17.859 [2024-11-26 19:45:12.942761] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:17.859 [2024-11-26 19:45:12.942777] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:17.859 [2024-11-26 19:45:12.944793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:17.859 [2024-11-26 19:45:12.944917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:17.859 [2024-11-26 19:45:12.945007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:17.859 [2024-11-26 19:45:12.945081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:17.859 [2024-11-26 19:45:12.949821] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:18.425 19:45:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:18.425 19:45:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:11:18.425 19:45:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:18.425 19:45:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:18.425 19:45:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:11:18.425 19:45:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:18.425 19:45:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:18.425 19:45:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.425 19:45:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:11:18.425 [2024-11-26 19:45:13.637246] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:18.425 19:45:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.425 19:45:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:18.425 19:45:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.425 19:45:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:11:18.425 Malloc0 00:11:18.425 19:45:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.425 19:45:13 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:18.425 19:45:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.425 19:45:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:11:18.425 19:45:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.425 19:45:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:18.425 19:45:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.425 19:45:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:11:18.425 19:45:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.425 19:45:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:18.425 19:45:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.425 19:45:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:11:18.683 [2024-11-26 19:45:13.673845] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:18.683 19:45:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.683 19:45:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:11:18.683 19:45:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:18.683 19:45:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:11:18.683 19:45:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:11:18.683 19:45:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:18.683 19:45:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:18.683 { 00:11:18.683 "params": { 00:11:18.683 "name": "Nvme$subsystem", 00:11:18.683 "trtype": "$TEST_TRANSPORT", 00:11:18.683 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:18.683 "adrfam": "ipv4", 00:11:18.683 "trsvcid": "$NVMF_PORT", 00:11:18.683 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:18.683 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:18.683 "hdgst": ${hdgst:-false}, 00:11:18.683 "ddgst": ${ddgst:-false} 00:11:18.683 }, 00:11:18.683 "method": "bdev_nvme_attach_controller" 00:11:18.683 } 00:11:18.683 EOF 00:11:18.683 )") 00:11:18.683 19:45:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:11:18.683 19:45:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
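The heredoc assembled above is how gen_nvmf_target_json builds the bdev configuration that bdevio consumes: the ${hdgst:-false}/${ddgst:-false} expansions fall back to false, the fragment is piped through jq, and the result reaches bdevio via process substitution, which is why the earlier command line shows --json /dev/fd/62. The rendered JSON is printed just below; as a hedged illustration only (not test code), the attach it describes is equivalent to this runtime RPC with the same parameters:

    # The controller attach from the JSON config expressed as an rpc.py call;
    # the flags mirror the bdev_nvme_attach_controller invocation used later in this log.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller \
        -b Nvme1 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1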
00:11:18.683 19:45:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:11:18.683 19:45:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:18.683 "params": { 00:11:18.683 "name": "Nvme1", 00:11:18.683 "trtype": "tcp", 00:11:18.683 "traddr": "10.0.0.3", 00:11:18.683 "adrfam": "ipv4", 00:11:18.683 "trsvcid": "4420", 00:11:18.683 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:18.683 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:18.683 "hdgst": false, 00:11:18.683 "ddgst": false 00:11:18.683 }, 00:11:18.684 "method": "bdev_nvme_attach_controller" 00:11:18.684 }' 00:11:18.684 [2024-11-26 19:45:13.712420] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:11:18.684 [2024-11-26 19:45:13.712484] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid69564 ] 00:11:18.684 [2024-11-26 19:45:13.852131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:18.684 [2024-11-26 19:45:13.901804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:18.684 [2024-11-26 19:45:13.901997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:18.684 [2024-11-26 19:45:13.902102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.684 [2024-11-26 19:45:13.915155] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:18.942 I/O targets: 00:11:18.942 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:18.942 00:11:18.942 00:11:18.942 CUnit - A unit testing framework for C - Version 2.1-3 00:11:18.942 http://cunit.sourceforge.net/ 00:11:18.942 00:11:18.942 00:11:18.942 Suite: bdevio tests on: Nvme1n1 00:11:18.942 Test: blockdev write read block ...passed 00:11:18.942 Test: blockdev write zeroes read block ...passed 00:11:18.942 Test: blockdev write zeroes read no split ...passed 00:11:18.942 Test: blockdev write zeroes read split ...passed 00:11:18.942 Test: blockdev write zeroes read split partial ...passed 00:11:18.942 Test: blockdev reset ...[2024-11-26 19:45:14.101829] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:11:18.942 [2024-11-26 19:45:14.101910] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16c9320 (9): Bad file descriptor 00:11:18.942 [2024-11-26 19:45:14.118210] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:11:18.942 passed 00:11:18.942 Test: blockdev write read 8 blocks ...passed 00:11:18.942 Test: blockdev write read size > 128k ...passed 00:11:18.942 Test: blockdev write read invalid size ...passed 00:11:18.942 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:18.942 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:18.942 Test: blockdev write read max offset ...passed 00:11:18.942 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:18.942 Test: blockdev writev readv 8 blocks ...passed 00:11:18.942 Test: blockdev writev readv 30 x 1block ...passed 00:11:18.942 Test: blockdev writev readv block ...passed 00:11:18.942 Test: blockdev writev readv size > 128k ...passed 00:11:18.942 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:18.942 Test: blockdev comparev and writev ...[2024-11-26 19:45:14.124023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:18.942 [2024-11-26 19:45:14.124127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:18.942 [2024-11-26 19:45:14.124183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:18.942 [2024-11-26 19:45:14.124223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:18.942 [2024-11-26 19:45:14.124508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:18.942 [2024-11-26 19:45:14.124569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:18.942 [2024-11-26 19:45:14.124609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:18.942 [2024-11-26 19:45:14.124642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:18.942 [2024-11-26 19:45:14.124986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:18.942 [2024-11-26 19:45:14.125031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:18.942 [2024-11-26 19:45:14.125077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:18.942 [2024-11-26 19:45:14.125111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:18.942 [2024-11-26 19:45:14.125363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:18.942 [2024-11-26 19:45:14.125413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:18.942 [2024-11-26 19:45:14.125456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:18.942 [2024-11-26 19:45:14.125491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:18.942 passed 00:11:18.942 Test: blockdev nvme passthru rw ...passed 00:11:18.942 Test: blockdev nvme passthru vendor specific ...[2024-11-26 19:45:14.126065] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:18.942 [2024-11-26 19:45:14.126135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:18.942 [2024-11-26 19:45:14.126243] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:18.942 [2024-11-26 19:45:14.126292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:18.942 [2024-11-26 19:45:14.126396] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:18.942 [2024-11-26 19:45:14.126441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:18.942 [2024-11-26 19:45:14.126545] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:18.942 [2024-11-26 19:45:14.126584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:18.942 passed 00:11:18.942 Test: blockdev nvme admin passthru ...passed 00:11:18.942 Test: blockdev copy ...passed 00:11:18.942 00:11:18.942 Run Summary: Type Total Ran Passed Failed Inactive 00:11:18.942 suites 1 1 n/a 0 0 00:11:18.942 tests 23 23 23 0 0 00:11:18.942 asserts 152 152 152 0 n/a 00:11:18.942 00:11:18.942 Elapsed time = 0.145 seconds 00:11:19.199 19:45:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:19.199 19:45:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.199 19:45:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:11:19.199 19:45:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.199 19:45:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:19.199 19:45:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:11:19.199 19:45:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:19.199 19:45:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:11:19.199 19:45:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:19.199 19:45:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:11:19.199 19:45:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:19.199 19:45:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:19.199 rmmod nvme_tcp 00:11:19.456 rmmod nvme_fabrics 00:11:19.456 rmmod nvme_keyring 00:11:19.456 19:45:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:19.456 19:45:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:11:19.456 19:45:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:11:19.456 19:45:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 69528 ']' 00:11:19.456 19:45:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 69528 00:11:19.456 19:45:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 69528 ']' 00:11:19.456 19:45:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 69528 00:11:19.456 19:45:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:11:19.456 19:45:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:19.456 19:45:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69528 00:11:19.456 19:45:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:11:19.456 19:45:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:11:19.456 killing process with pid 69528 00:11:19.456 19:45:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69528' 00:11:19.456 19:45:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 69528 00:11:19.456 19:45:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 69528 00:11:19.713 19:45:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:19.713 19:45:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:19.713 19:45:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:19.713 19:45:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:11:19.713 19:45:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:11:19.713 19:45:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:11:19.713 19:45:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:19.713 19:45:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:19.713 19:45:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:19.714 19:45:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:19.714 19:45:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:19.714 19:45:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:19.714 19:45:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:19.714 19:45:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:19.714 19:45:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:19.714 19:45:14 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:19.714 19:45:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:19.714 19:45:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:19.714 19:45:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:19.714 19:45:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:19.972 19:45:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:19.972 19:45:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:19.972 19:45:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:19.972 19:45:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:19.972 19:45:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:19.972 19:45:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.972 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:11:19.972 00:11:19.972 real 0m2.758s 00:11:19.972 user 0m8.380s 00:11:19.972 sys 0m0.998s 00:11:19.972 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:19.972 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:11:19.972 ************************************ 00:11:19.972 END TEST nvmf_bdevio_no_huge 00:11:19.972 ************************************ 00:11:19.972 19:45:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:11:19.972 19:45:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:19.972 19:45:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:19.972 19:45:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:19.972 ************************************ 00:11:19.972 START TEST nvmf_tls 00:11:19.972 ************************************ 00:11:19.972 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:11:19.972 * Looking for test storage... 
00:11:19.972 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:19.972 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:19.972 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:19.972 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:11:19.972 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:19.972 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:19.972 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:19.972 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:19.972 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:11:19.972 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:11:19.972 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:11:19.973 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:11:19.973 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:11:19.973 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:11:19.973 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:11:19.973 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:19.973 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:11:19.973 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:11:19.973 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:19.973 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:19.973 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:11:19.973 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:11:19.973 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:19.973 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:11:19.973 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:11:19.973 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:11:19.973 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:11:19.973 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:19.973 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:11:19.973 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:11:19.973 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:19.973 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:19.973 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:11:19.973 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:19.973 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:19.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.973 --rc genhtml_branch_coverage=1 00:11:19.973 --rc genhtml_function_coverage=1 00:11:19.973 --rc genhtml_legend=1 00:11:19.973 --rc geninfo_all_blocks=1 00:11:19.973 --rc geninfo_unexecuted_blocks=1 00:11:19.973 00:11:19.973 ' 00:11:19.973 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:19.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.973 --rc genhtml_branch_coverage=1 00:11:19.973 --rc genhtml_function_coverage=1 00:11:19.973 --rc genhtml_legend=1 00:11:19.973 --rc geninfo_all_blocks=1 00:11:19.973 --rc geninfo_unexecuted_blocks=1 00:11:19.973 00:11:19.973 ' 00:11:19.973 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:19.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.973 --rc genhtml_branch_coverage=1 00:11:19.973 --rc genhtml_function_coverage=1 00:11:19.973 --rc genhtml_legend=1 00:11:19.973 --rc geninfo_all_blocks=1 00:11:19.973 --rc geninfo_unexecuted_blocks=1 00:11:19.973 00:11:19.973 ' 00:11:19.973 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:19.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.973 --rc genhtml_branch_coverage=1 00:11:19.973 --rc genhtml_function_coverage=1 00:11:19.973 --rc genhtml_legend=1 00:11:19.973 --rc geninfo_all_blocks=1 00:11:19.973 --rc geninfo_unexecuted_blocks=1 00:11:19.973 00:11:19.973 ' 00:11:19.973 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:19.973 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:11:19.973 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:19.973 19:45:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:19.973 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:19.973 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:19.973 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:19.973 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:19.973 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:19.973 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:19.973 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:19.973 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:19.973 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:11:19.973 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=91838eb1-5852-43eb-90b2-09876f360ab2 00:11:19.973 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:19.973 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:19.973 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:19.973 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:19.973 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:19.973 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:20.230 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:20.230 
19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:20.230 Cannot find device "nvmf_init_br" 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:20.230 Cannot find device "nvmf_init_br2" 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:20.230 Cannot find device "nvmf_tgt_br" 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:20.230 Cannot find device "nvmf_tgt_br2" 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:20.230 Cannot find device "nvmf_init_br" 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:20.230 Cannot find device "nvmf_init_br2" 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:20.230 Cannot find device "nvmf_tgt_br" 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:20.230 Cannot find device "nvmf_tgt_br2" 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:20.230 Cannot find device "nvmf_br" 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:20.230 Cannot find device "nvmf_init_if" 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:20.230 Cannot find device "nvmf_init_if2" 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:20.230 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:20.230 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:20.230 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:20.231 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:20.231 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:20.231 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:20.231 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:20.231 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:20.231 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:20.231 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:20.231 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:20.231 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:20.231 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:20.231 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:20.231 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:20.231 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:20.231 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:20.231 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:20.231 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:20.231 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:20.231 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:20.231 19:45:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:20.231 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:20.231 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:20.231 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:20.231 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:20.231 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:11:20.231 00:11:20.231 --- 10.0.0.3 ping statistics --- 00:11:20.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:20.231 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:11:20.231 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:20.231 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:20.231 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.025 ms 00:11:20.231 00:11:20.231 --- 10.0.0.4 ping statistics --- 00:11:20.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:20.231 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:11:20.231 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:20.489 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:20.489 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:11:20.489 00:11:20.489 --- 10.0.0.1 ping statistics --- 00:11:20.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:20.489 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:11:20.489 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:20.489 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:20.489 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.042 ms 00:11:20.489 00:11:20.489 --- 10.0.0.2 ping statistics --- 00:11:20.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:20.489 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:11:20.489 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:20.489 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 00:11:20.489 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:20.489 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:20.489 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:20.489 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:20.489 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:20.489 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:20.489 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:20.489 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:11:20.489 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:20.489 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:20.489 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:11:20.489 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=69795 00:11:20.489 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 69795 00:11:20.489 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:11:20.489 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 69795 ']' 00:11:20.489 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:20.489 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:20.489 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:20.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:20.489 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:20.489 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:11:20.489 [2024-11-26 19:45:15.543444] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
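The TLS target above is started with --wait-for-rpc, so it comes up paused: the ssl socket implementation has to be selected and configured before the framework initializes. A minimal sketch of the core RPC sequence the test drives next (all of these calls appear in the trace that follows; running them by hand against the default /var/tmp/spdk.sock is an assumption):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc sock_set_default_impl -i ssl                        # make ssl the default socket impl
    $rpc sock_impl_set_options -i ssl --tls-version 13       # require TLS 1.3 for NVMe/TCP
    $rpc sock_impl_get_options -i ssl | jq -r .tls_version   # verify: prints 13
    $rpc framework_start_init                                # release the paused target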
00:11:20.489 [2024-11-26 19:45:15.543501] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:20.489 [2024-11-26 19:45:15.685102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:20.489 [2024-11-26 19:45:15.721644] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:20.489 [2024-11-26 19:45:15.721686] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:20.489 [2024-11-26 19:45:15.721693] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:20.489 [2024-11-26 19:45:15.721698] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:20.489 [2024-11-26 19:45:15.721702] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:20.489 [2024-11-26 19:45:15.721962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:21.487 19:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:21.487 19:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:11:21.487 19:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:21.487 19:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:21.487 19:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:11:21.487 19:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:21.487 19:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:11:21.487 19:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:11:21.487 true 00:11:21.487 19:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:21.487 19:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:11:21.746 19:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:11:21.746 19:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:11:21.746 19:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:11:22.004 19:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:22.004 19:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:11:22.262 19:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:11:22.262 19:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:11:22.262 19:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:11:22.262 19:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:11:22.262 19:45:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:22.521 19:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:11:22.521 19:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:11:22.521 19:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:22.521 19:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:11:22.779 19:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:11:22.779 19:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:11:22.779 19:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:11:23.037 19:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:23.037 19:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:11:23.037 19:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:11:23.037 19:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:11:23.037 19:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:11:23.295 19:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:11:23.295 19:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:23.553 19:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:11:23.553 19:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:11:23.553 19:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:11:23.553 19:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:11:23.553 19:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:11:23.553 19:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:11:23.553 19:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:11:23.553 19:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:11:23.553 19:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:11:23.553 19:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:11:23.553 19:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:11:23.553 19:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:11:23.553 19:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:11:23.553 19:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:11:23.553 19:45:18 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:11:23.553 19:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:11:23.553 19:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:11:23.553 19:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:11:23.553 19:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:11:23.553 19:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.zx6RtXmBFc 00:11:23.553 19:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:11:23.553 19:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.aR2u7Kaluo 00:11:23.553 19:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:11:23.553 19:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:11:23.553 19:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.zx6RtXmBFc 00:11:23.553 19:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.aR2u7Kaluo 00:11:23.553 19:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:11:23.811 19:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:11:24.070 [2024-11-26 19:45:19.206922] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:24.070 19:45:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.zx6RtXmBFc 00:11:24.070 19:45:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.zx6RtXmBFc 00:11:24.070 19:45:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:11:24.328 [2024-11-26 19:45:19.443424] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:24.329 19:45:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:11:24.586 19:45:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:11:24.844 [2024-11-26 19:45:19.859473] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:11:24.844 [2024-11-26 19:45:19.859621] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:24.844 19:45:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:11:24.844 malloc0 00:11:24.844 19:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:25.102 19:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 
/tmp/tmp.zx6RtXmBFc 00:11:25.360 19:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:11:25.616 19:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.zx6RtXmBFc 00:11:37.810 Initializing NVMe Controllers 00:11:37.810 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:11:37.810 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:37.810 Initialization complete. Launching workers. 00:11:37.810 ======================================================== 00:11:37.810 Latency(us) 00:11:37.810 Device Information : IOPS MiB/s Average min max 00:11:37.810 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17559.30 68.59 3645.07 1058.05 12590.75 00:11:37.810 ======================================================== 00:11:37.810 Total : 17559.30 68.59 3645.07 1058.05 12590.75 00:11:37.810 00:11:37.810 19:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zx6RtXmBFc 00:11:37.810 19:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:37.810 19:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:11:37.810 19:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:11:37.810 19:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.zx6RtXmBFc 00:11:37.810 19:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:37.810 19:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=70022 00:11:37.811 19:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:37.811 19:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 70022 /var/tmp/bdevperf.sock 00:11:37.811 19:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 70022 ']' 00:11:37.811 19:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:37.811 19:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:37.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:37.811 19:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:11:37.811 19:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:37.811 19:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:11:37.811 19:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:37.811 [2024-11-26 19:45:30.980475] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:11:37.811 [2024-11-26 19:45:30.980538] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70022 ] 00:11:37.811 [2024-11-26 19:45:31.116399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:37.811 [2024-11-26 19:45:31.152515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:37.811 [2024-11-26 19:45:31.183325] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:37.811 19:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:37.811 19:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:11:37.811 19:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zx6RtXmBFc 00:11:37.811 19:45:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:11:37.811 [2024-11-26 19:45:32.299709] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:37.811 TLSTESTn1 00:11:37.811 19:45:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:11:37.811 Running I/O for 10 seconds... 
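Condensed to just the rpc.py and perf invocations already visible in the trace above (the key file, NQNs and 10.0.0.3:4420 are the values generated by this run; nothing below is new), the TLS bring-up and the first data-path check amount to:

  # Target side: force the ssl socket implementation onto TLS 1.3, then build the subsystem
  rpc.py sock_impl_set_options -i ssl --tls-version 13
  rpc.py framework_start_init
  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # -k: TLS listener (logged above as experimental)
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py keyring_file_add_key key0 /tmp/tmp.zx6RtXmBFc            # PSK file written and chmod 0600 earlier in the trace
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
  # Initiator side (run inside the nvmf_tgt_ns_spdk netns): perf dials the TLS listener with the same PSK file
  spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 --psk-path /tmp/tmp.zx6RtXmBFc \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1'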
00:11:39.352 6437.00 IOPS, 25.14 MiB/s [2024-11-26T19:45:35.533Z] 6723.50 IOPS, 26.26 MiB/s [2024-11-26T19:45:36.921Z] 6823.33 IOPS, 26.65 MiB/s [2024-11-26T19:45:37.488Z] 6865.00 IOPS, 26.82 MiB/s [2024-11-26T19:45:38.862Z] 6911.00 IOPS, 27.00 MiB/s [2024-11-26T19:45:39.795Z] 6947.00 IOPS, 27.14 MiB/s [2024-11-26T19:45:40.777Z] 6966.29 IOPS, 27.21 MiB/s [2024-11-26T19:45:41.710Z] 6979.12 IOPS, 27.26 MiB/s [2024-11-26T19:45:42.642Z] 6978.56 IOPS, 27.26 MiB/s [2024-11-26T19:45:42.642Z] 6993.60 IOPS, 27.32 MiB/s 00:11:47.395 Latency(us) 00:11:47.395 [2024-11-26T19:45:42.642Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:47.395 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:11:47.395 Verification LBA range: start 0x0 length 0x2000 00:11:47.396 TLSTESTn1 : 10.01 6999.16 27.34 0.00 0.00 18258.23 3604.48 16031.11 00:11:47.396 [2024-11-26T19:45:42.643Z] =================================================================================================================== 00:11:47.396 [2024-11-26T19:45:42.643Z] Total : 6999.16 27.34 0.00 0.00 18258.23 3604.48 16031.11 00:11:47.396 { 00:11:47.396 "results": [ 00:11:47.396 { 00:11:47.396 "job": "TLSTESTn1", 00:11:47.396 "core_mask": "0x4", 00:11:47.396 "workload": "verify", 00:11:47.396 "status": "finished", 00:11:47.396 "verify_range": { 00:11:47.396 "start": 0, 00:11:47.396 "length": 8192 00:11:47.396 }, 00:11:47.396 "queue_depth": 128, 00:11:47.396 "io_size": 4096, 00:11:47.396 "runtime": 10.010344, 00:11:47.396 "iops": 6999.160068824807, 00:11:47.396 "mibps": 27.340469018846903, 00:11:47.396 "io_failed": 0, 00:11:47.396 "io_timeout": 0, 00:11:47.396 "avg_latency_us": 18258.230292128515, 00:11:47.396 "min_latency_us": 3604.48, 00:11:47.396 "max_latency_us": 16031.113846153847 00:11:47.396 } 00:11:47.396 ], 00:11:47.396 "core_count": 1 00:11:47.396 } 00:11:47.396 19:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:47.396 19:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 70022 00:11:47.396 19:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 70022 ']' 00:11:47.396 19:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 70022 00:11:47.396 19:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:11:47.396 19:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:47.396 19:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70022 00:11:47.396 19:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:11:47.396 19:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:11:47.396 killing process with pid 70022 00:11:47.396 19:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70022' 00:11:47.396 Received shutdown signal, test time was about 10.000000 seconds 00:11:47.396 00:11:47.396 Latency(us) 00:11:47.396 [2024-11-26T19:45:42.643Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:47.396 [2024-11-26T19:45:42.643Z] =================================================================================================================== 00:11:47.396 [2024-11-26T19:45:42.643Z] Total : 0.00 0.00 0.00 0.00 
0.00 0.00 0.00 00:11:47.396 19:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 70022 00:11:47.396 19:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 70022 00:11:47.396 19:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.aR2u7Kaluo 00:11:47.396 19:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:11:47.396 19:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.aR2u7Kaluo 00:11:47.396 19:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:11:47.396 19:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:47.396 19:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:11:47.396 19:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:47.396 19:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.aR2u7Kaluo 00:11:47.396 19:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:47.396 19:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:11:47.396 19:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:11:47.396 19:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.aR2u7Kaluo 00:11:47.396 19:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:47.396 19:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=70160 00:11:47.396 19:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:47.396 19:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 70160 /var/tmp/bdevperf.sock 00:11:47.396 19:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 70160 ']' 00:11:47.396 19:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:47.396 19:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:47.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:47.396 19:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:47.396 19:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:47.396 19:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:11:47.396 19:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:47.654 [2024-11-26 19:45:42.667075] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:11:47.654 [2024-11-26 19:45:42.667145] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70160 ] 00:11:47.654 [2024-11-26 19:45:42.804904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:47.654 [2024-11-26 19:45:42.838811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:47.654 [2024-11-26 19:45:42.868219] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:48.587 19:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:48.587 19:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:11:48.587 19:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.aR2u7Kaluo 00:11:48.587 19:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:11:48.845 [2024-11-26 19:45:43.922727] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:48.846 [2024-11-26 19:45:43.927858] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:11:48.846 [2024-11-26 19:45:43.928523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b0ff0 (107): Transport endpoint is not connected 00:11:48.846 [2024-11-26 19:45:43.929515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b0ff0 (9): Bad file descriptor 00:11:48.846 [2024-11-26 19:45:43.930513] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:11:48.846 [2024-11-26 19:45:43.930531] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:11:48.846 [2024-11-26 19:45:43.930537] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:11:48.846 [2024-11-26 19:45:43.930545] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:11:48.846 request: 00:11:48.846 { 00:11:48.846 "name": "TLSTEST", 00:11:48.846 "trtype": "tcp", 00:11:48.846 "traddr": "10.0.0.3", 00:11:48.846 "adrfam": "ipv4", 00:11:48.846 "trsvcid": "4420", 00:11:48.846 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:48.846 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:48.846 "prchk_reftag": false, 00:11:48.846 "prchk_guard": false, 00:11:48.846 "hdgst": false, 00:11:48.846 "ddgst": false, 00:11:48.846 "psk": "key0", 00:11:48.846 "allow_unrecognized_csi": false, 00:11:48.846 "method": "bdev_nvme_attach_controller", 00:11:48.846 "req_id": 1 00:11:48.846 } 00:11:48.846 Got JSON-RPC error response 00:11:48.846 response: 00:11:48.846 { 00:11:48.846 "code": -5, 00:11:48.846 "message": "Input/output error" 00:11:48.846 } 00:11:48.846 19:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 70160 00:11:48.846 19:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 70160 ']' 00:11:48.846 19:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 70160 00:11:48.846 19:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:11:48.846 19:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:48.846 19:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70160 00:11:48.846 19:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:11:48.846 19:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:11:48.846 19:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70160' 00:11:48.846 killing process with pid 70160 00:11:48.846 19:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 70160 00:11:48.846 Received shutdown signal, test time was about 10.000000 seconds 00:11:48.846 00:11:48.846 Latency(us) 00:11:48.846 [2024-11-26T19:45:44.093Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:48.846 [2024-11-26T19:45:44.093Z] =================================================================================================================== 00:11:48.846 [2024-11-26T19:45:44.093Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:48.846 19:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 70160 00:11:48.846 19:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:11:48.846 19:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:11:48.846 19:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:48.846 19:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:48.846 19:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:48.846 19:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.zx6RtXmBFc 00:11:48.846 19:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:11:48.846 19:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.zx6RtXmBFc 
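Each of the negative cases from here on is wrapped in the NOT helper from autotest_common.sh, which is what the es=0 / valid_exec_arg / es=1 bookkeeping in the trace belongs to. A minimal sketch of its effect (an assumption for illustration, not the actual implementation, which also validates the command and special-cases exit codes above 128):

  # Sketch only: invert the wrapped command's exit status, so
  # `NOT run_bdevperf ... /tmp/tmp.aR2u7Kaluo` counts as a pass when the attach fails.
  NOT() {
      local es=0
      "$@" || es=$?
      (( es != 0 ))
  }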
00:11:48.846 19:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:11:48.846 19:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:48.846 19:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:11:48.846 19:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:48.846 19:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.zx6RtXmBFc 00:11:48.846 19:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:48.846 19:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:11:48.846 19:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:11:48.846 19:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.zx6RtXmBFc 00:11:48.846 19:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:48.846 19:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=70193 00:11:48.846 19:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:48.846 19:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 70193 /var/tmp/bdevperf.sock 00:11:48.846 19:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 70193 ']' 00:11:48.846 19:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:48.846 19:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:48.846 19:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:48.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:48.846 19:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:48.846 19:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:48.846 19:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:11:49.104 [2024-11-26 19:45:44.117540] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:11:49.104 [2024-11-26 19:45:44.117605] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70193 ] 00:11:49.104 [2024-11-26 19:45:44.253818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:49.104 [2024-11-26 19:45:44.285688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:49.104 [2024-11-26 19:45:44.313887] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:50.037 19:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:50.037 19:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:11:50.037 19:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zx6RtXmBFc 00:11:50.037 19:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:11:50.296 [2024-11-26 19:45:45.355877] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:50.296 [2024-11-26 19:45:45.359809] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:11:50.296 [2024-11-26 19:45:45.359836] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:11:50.296 [2024-11-26 19:45:45.359865] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:11:50.296 [2024-11-26 19:45:45.360627] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1512ff0 (107): Transport endpoint is not connected 00:11:50.296 [2024-11-26 19:45:45.361619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1512ff0 (9): Bad file descriptor 00:11:50.296 [2024-11-26 19:45:45.362617] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:11:50.296 [2024-11-26 19:45:45.362634] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:11:50.296 [2024-11-26 19:45:45.362639] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:11:50.296 [2024-11-26 19:45:45.362646] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:11:50.296 request: 00:11:50.296 { 00:11:50.296 "name": "TLSTEST", 00:11:50.296 "trtype": "tcp", 00:11:50.296 "traddr": "10.0.0.3", 00:11:50.296 "adrfam": "ipv4", 00:11:50.296 "trsvcid": "4420", 00:11:50.296 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:50.296 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:11:50.296 "prchk_reftag": false, 00:11:50.296 "prchk_guard": false, 00:11:50.296 "hdgst": false, 00:11:50.296 "ddgst": false, 00:11:50.296 "psk": "key0", 00:11:50.296 "allow_unrecognized_csi": false, 00:11:50.296 "method": "bdev_nvme_attach_controller", 00:11:50.296 "req_id": 1 00:11:50.296 } 00:11:50.296 Got JSON-RPC error response 00:11:50.296 response: 00:11:50.296 { 00:11:50.296 "code": -5, 00:11:50.296 "message": "Input/output error" 00:11:50.296 } 00:11:50.296 19:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 70193 00:11:50.296 19:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 70193 ']' 00:11:50.296 19:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 70193 00:11:50.296 19:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:11:50.296 19:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:50.296 19:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70193 00:11:50.296 19:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:11:50.296 19:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:11:50.297 killing process with pid 70193 00:11:50.297 19:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70193' 00:11:50.297 Received shutdown signal, test time was about 10.000000 seconds 00:11:50.297 00:11:50.297 Latency(us) 00:11:50.297 [2024-11-26T19:45:45.544Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:50.297 [2024-11-26T19:45:45.544Z] =================================================================================================================== 00:11:50.297 [2024-11-26T19:45:45.544Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:50.297 19:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 70193 00:11:50.297 19:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 70193 00:11:50.297 19:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:11:50.297 19:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:11:50.297 19:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:50.297 19:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:50.297 19:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:50.297 19:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.zx6RtXmBFc 00:11:50.297 19:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:11:50.297 19:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.zx6RtXmBFc 
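Only host1 was registered on cnode1 (with key0), so the negative attach in target/tls.sh@150 differs from the working one only in the host NQN; the target cannot find a PSK for the offered TLS identity, the connection is dropped, and bdev_nvme_attach_controller returns the -5 Input/output error seen above. The @153 case against cnode2 that follows fails the same way. Both commands below are taken verbatim from this trace:

  # Passes (target/tls.sh@144): host1 <-> key0 is registered on cnode1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
  # Fails (target/tls.sh@150): no PSK for host2 on cnode1 -> "Could not find PSK for identity"
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0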
00:11:50.297 19:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:11:50.297 19:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:50.297 19:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:11:50.297 19:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:50.297 19:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.zx6RtXmBFc 00:11:50.297 19:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:50.297 19:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:11:50.297 19:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:11:50.297 19:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.zx6RtXmBFc 00:11:50.297 19:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:50.297 19:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=70218 00:11:50.297 19:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:50.297 19:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 70218 /var/tmp/bdevperf.sock 00:11:50.297 19:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:50.297 19:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 70218 ']' 00:11:50.297 19:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:50.297 19:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:50.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:50.297 19:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:50.297 19:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:50.297 19:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:11:50.297 [2024-11-26 19:45:45.534961] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:11:50.297 [2024-11-26 19:45:45.535024] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70218 ] 00:11:50.556 [2024-11-26 19:45:45.668517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.556 [2024-11-26 19:45:45.700760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:50.556 [2024-11-26 19:45:45.729340] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:51.490 19:45:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:51.490 19:45:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:11:51.490 19:45:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zx6RtXmBFc 00:11:51.490 19:45:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:11:51.749 [2024-11-26 19:45:46.765658] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:51.749 [2024-11-26 19:45:46.769589] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:11:51.749 [2024-11-26 19:45:46.769616] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:11:51.749 [2024-11-26 19:45:46.769646] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:11:51.749 [2024-11-26 19:45:46.770413] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916ff0 (107): Transport endpoint is not connected 00:11:51.749 [2024-11-26 19:45:46.771405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916ff0 (9): Bad file descriptor 00:11:51.749 [2024-11-26 19:45:46.772404] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:11:51.749 [2024-11-26 19:45:46.772418] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:11:51.749 [2024-11-26 19:45:46.772424] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:11:51.749 [2024-11-26 19:45:46.772431] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:11:51.749 request: 00:11:51.749 { 00:11:51.749 "name": "TLSTEST", 00:11:51.749 "trtype": "tcp", 00:11:51.749 "traddr": "10.0.0.3", 00:11:51.749 "adrfam": "ipv4", 00:11:51.749 "trsvcid": "4420", 00:11:51.749 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:11:51.749 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:51.749 "prchk_reftag": false, 00:11:51.749 "prchk_guard": false, 00:11:51.749 "hdgst": false, 00:11:51.749 "ddgst": false, 00:11:51.749 "psk": "key0", 00:11:51.749 "allow_unrecognized_csi": false, 00:11:51.749 "method": "bdev_nvme_attach_controller", 00:11:51.749 "req_id": 1 00:11:51.749 } 00:11:51.749 Got JSON-RPC error response 00:11:51.749 response: 00:11:51.749 { 00:11:51.749 "code": -5, 00:11:51.749 "message": "Input/output error" 00:11:51.749 } 00:11:51.749 19:45:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 70218 00:11:51.749 19:45:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 70218 ']' 00:11:51.749 19:45:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 70218 00:11:51.749 19:45:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:11:51.749 19:45:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:51.749 19:45:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70218 00:11:51.749 19:45:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:11:51.749 killing process with pid 70218 00:11:51.749 19:45:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:11:51.749 19:45:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70218' 00:11:51.749 Received shutdown signal, test time was about 10.000000 seconds 00:11:51.749 00:11:51.749 Latency(us) 00:11:51.749 [2024-11-26T19:45:46.996Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:51.749 [2024-11-26T19:45:46.996Z] =================================================================================================================== 00:11:51.749 [2024-11-26T19:45:46.996Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:51.749 19:45:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 70218 00:11:51.749 19:45:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 70218 00:11:51.749 19:45:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:11:51.749 19:45:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:11:51.749 19:45:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:51.749 19:45:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:51.749 19:45:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:51.749 19:45:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:11:51.749 19:45:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:11:51.749 19:45:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:11:51.749 19:45:46 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:11:51.749 19:45:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:51.750 19:45:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:11:51.750 19:45:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:51.750 19:45:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:11:51.750 19:45:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:51.750 19:45:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:11:51.750 19:45:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:11:51.750 19:45:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:11:51.750 19:45:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:51.750 19:45:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=70247 00:11:51.750 19:45:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:51.750 19:45:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 70247 /var/tmp/bdevperf.sock 00:11:51.750 19:45:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 70247 ']' 00:11:51.750 19:45:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:51.750 19:45:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:51.750 19:45:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:51.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:51.750 19:45:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:51.750 19:45:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:51.750 19:45:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:11:51.750 [2024-11-26 19:45:46.954428] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:11:51.750 [2024-11-26 19:45:46.954493] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70247 ] 00:11:52.009 [2024-11-26 19:45:47.082191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:52.009 [2024-11-26 19:45:47.114791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:52.009 [2024-11-26 19:45:47.143019] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:52.630 19:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:52.630 19:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:11:52.630 19:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:11:52.901 [2024-11-26 19:45:47.976964] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:11:52.901 [2024-11-26 19:45:47.977002] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:11:52.901 request: 00:11:52.901 { 00:11:52.901 "name": "key0", 00:11:52.901 "path": "", 00:11:52.901 "method": "keyring_file_add_key", 00:11:52.901 "req_id": 1 00:11:52.901 } 00:11:52.901 Got JSON-RPC error response 00:11:52.901 response: 00:11:52.901 { 00:11:52.901 "code": -1, 00:11:52.901 "message": "Operation not permitted" 00:11:52.901 } 00:11:52.901 19:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:11:53.159 [2024-11-26 19:45:48.201090] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:53.159 [2024-11-26 19:45:48.201130] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:11:53.159 request: 00:11:53.159 { 00:11:53.159 "name": "TLSTEST", 00:11:53.159 "trtype": "tcp", 00:11:53.159 "traddr": "10.0.0.3", 00:11:53.159 "adrfam": "ipv4", 00:11:53.159 "trsvcid": "4420", 00:11:53.159 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:53.159 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:53.159 "prchk_reftag": false, 00:11:53.159 "prchk_guard": false, 00:11:53.159 "hdgst": false, 00:11:53.159 "ddgst": false, 00:11:53.159 "psk": "key0", 00:11:53.159 "allow_unrecognized_csi": false, 00:11:53.159 "method": "bdev_nvme_attach_controller", 00:11:53.159 "req_id": 1 00:11:53.159 } 00:11:53.159 Got JSON-RPC error response 00:11:53.159 response: 00:11:53.159 { 00:11:53.159 "code": -126, 00:11:53.159 "message": "Required key not available" 00:11:53.159 } 00:11:53.159 19:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 70247 00:11:53.159 19:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 70247 ']' 00:11:53.159 19:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 70247 00:11:53.159 19:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:11:53.159 19:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:53.159 19:45:48 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70247 00:11:53.159 19:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:11:53.159 19:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:11:53.159 killing process with pid 70247 00:11:53.159 19:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70247' 00:11:53.159 Received shutdown signal, test time was about 10.000000 seconds 00:11:53.159 00:11:53.159 Latency(us) 00:11:53.159 [2024-11-26T19:45:48.406Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:53.159 [2024-11-26T19:45:48.406Z] =================================================================================================================== 00:11:53.159 [2024-11-26T19:45:48.406Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:53.159 19:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 70247 00:11:53.159 19:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 70247 00:11:53.159 19:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:11:53.159 19:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:11:53.159 19:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:53.159 19:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:53.159 19:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:53.159 19:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 69795 00:11:53.159 19:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 69795 ']' 00:11:53.159 19:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 69795 00:11:53.159 19:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:11:53.159 19:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:53.159 19:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69795 00:11:53.159 19:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:53.159 killing process with pid 69795 00:11:53.159 19:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:53.159 19:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69795' 00:11:53.159 19:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 69795 00:11:53.159 19:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 69795 00:11:53.418 19:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:11:53.418 19:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:11:53.418 19:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:11:53.418 19:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 
-- # prefix=NVMeTLSkey-1 00:11:53.418 19:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:11:53.418 19:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:11:53.418 19:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:11:53.418 19:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:11:53.418 19:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:11:53.418 19:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.ZiBJzuZ0GM 00:11:53.418 19:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:11:53.418 19:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.ZiBJzuZ0GM 00:11:53.418 19:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:11:53.418 19:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:53.418 19:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:53.418 19:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:11:53.418 19:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=70286 00:11:53.418 19:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 70286 00:11:53.418 19:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 70286 ']' 00:11:53.418 19:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:53.418 19:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:53.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:53.418 19:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:53.418 19:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:53.418 19:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:53.418 19:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:11:53.418 [2024-11-26 19:45:48.559123] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:11:53.418 [2024-11-26 19:45:48.559185] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:53.676 [2024-11-26 19:45:48.696394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:53.676 [2024-11-26 19:45:48.727024] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:53.676 [2024-11-26 19:45:48.727066] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
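The keys fed to keyring_file_add_key throughout this suite use the NVMe TLS PSK interchange form produced by format_interchange_psk above: the NVMeTLSkey-1 prefix, a two-digit field carrying the digest argument (1 -> 01, 2 -> 02), and a base64 payload that is the literal key string passed in with what appears to be a 4-byte checksum appended. Reassembled from the values in this trace (nothing here is new except the variable layout):

  key_long='NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:'
  key_long_path=$(mktemp)                 # /tmp/tmp.ZiBJzuZ0GM in this run
  echo -n "$key_long" > "$key_long_path"
  chmod 0600 "$key_long_path"             # target/tls.sh@171 later deliberately relaxes this file to 0666 for a negative test
  rpc.py keyring_file_add_key key0 "$key_long_path"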
00:11:53.676 [2024-11-26 19:45:48.727072] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:53.676 [2024-11-26 19:45:48.727078] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:53.676 [2024-11-26 19:45:48.727083] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:53.676 [2024-11-26 19:45:48.727304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:53.676 [2024-11-26 19:45:48.755793] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:54.240 19:45:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:54.240 19:45:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:11:54.240 19:45:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:54.240 19:45:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:54.240 19:45:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:11:54.240 19:45:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:54.240 19:45:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.ZiBJzuZ0GM 00:11:54.240 19:45:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ZiBJzuZ0GM 00:11:54.240 19:45:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:11:54.496 [2024-11-26 19:45:49.639863] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:54.496 19:45:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:11:54.753 19:45:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:11:55.010 [2024-11-26 19:45:50.051929] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:11:55.010 [2024-11-26 19:45:50.052080] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:55.010 19:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:11:55.010 malloc0 00:11:55.010 19:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:55.268 19:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ZiBJzuZ0GM 00:11:55.581 19:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:11:55.581 19:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZiBJzuZ0GM 00:11:55.581 19:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
00:11:55.581 19:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:11:55.581 19:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:11:55.581 19:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ZiBJzuZ0GM 00:11:55.581 19:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:55.581 19:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=70336 00:11:55.581 19:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:55.581 19:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:55.581 19:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 70336 /var/tmp/bdevperf.sock 00:11:55.581 19:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 70336 ']' 00:11:55.581 19:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:55.581 19:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:55.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:55.581 19:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:55.581 19:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:55.581 19:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:11:55.581 [2024-11-26 19:45:50.795096] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:11:55.581 [2024-11-26 19:45:50.795151] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70336 ] 00:11:55.863 [2024-11-26 19:45:50.934121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:55.863 [2024-11-26 19:45:50.966663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:55.863 [2024-11-26 19:45:50.995944] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:56.793 19:45:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:56.793 19:45:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:11:56.793 19:45:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZiBJzuZ0GM 00:11:56.793 19:45:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:11:57.050 [2024-11-26 19:45:52.079130] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:57.050 TLSTESTn1 00:11:57.050 19:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:11:57.050 Running I/O for 10 seconds... 00:11:59.381 7049.00 IOPS, 27.54 MiB/s [2024-11-26T19:45:55.561Z] 6987.00 IOPS, 27.29 MiB/s [2024-11-26T19:45:56.492Z] 7044.67 IOPS, 27.52 MiB/s [2024-11-26T19:45:57.425Z] 7075.00 IOPS, 27.64 MiB/s [2024-11-26T19:45:58.359Z] 7099.20 IOPS, 27.73 MiB/s [2024-11-26T19:45:59.292Z] 7089.17 IOPS, 27.69 MiB/s [2024-11-26T19:46:00.666Z] 7089.71 IOPS, 27.69 MiB/s [2024-11-26T19:46:01.599Z] 7084.75 IOPS, 27.67 MiB/s [2024-11-26T19:46:02.552Z] 7065.00 IOPS, 27.60 MiB/s [2024-11-26T19:46:02.552Z] 7050.50 IOPS, 27.54 MiB/s 00:12:07.305 Latency(us) 00:12:07.305 [2024-11-26T19:46:02.552Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:07.305 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:12:07.305 Verification LBA range: start 0x0 length 0x2000 00:12:07.305 TLSTESTn1 : 10.01 7056.10 27.56 0.00 0.00 18112.00 3402.83 15728.64 00:12:07.305 [2024-11-26T19:46:02.552Z] =================================================================================================================== 00:12:07.305 [2024-11-26T19:46:02.552Z] Total : 7056.10 27.56 0.00 0.00 18112.00 3402.83 15728.64 00:12:07.305 { 00:12:07.305 "results": [ 00:12:07.305 { 00:12:07.305 "job": "TLSTESTn1", 00:12:07.305 "core_mask": "0x4", 00:12:07.305 "workload": "verify", 00:12:07.305 "status": "finished", 00:12:07.305 "verify_range": { 00:12:07.305 "start": 0, 00:12:07.305 "length": 8192 00:12:07.305 }, 00:12:07.305 "queue_depth": 128, 00:12:07.305 "io_size": 4096, 00:12:07.305 "runtime": 10.009637, 00:12:07.305 "iops": 7056.100036394926, 00:12:07.305 "mibps": 27.56289076716768, 00:12:07.305 "io_failed": 0, 00:12:07.305 "io_timeout": 0, 00:12:07.305 "avg_latency_us": 18111.996779705874, 00:12:07.305 "min_latency_us": 3402.8307692307694, 00:12:07.305 
"max_latency_us": 15728.64 00:12:07.305 } 00:12:07.305 ], 00:12:07.305 "core_count": 1 00:12:07.305 } 00:12:07.305 19:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:07.305 19:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 70336 00:12:07.305 19:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 70336 ']' 00:12:07.305 19:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 70336 00:12:07.305 19:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:07.305 19:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:07.305 19:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70336 00:12:07.305 killing process with pid 70336 00:12:07.305 Received shutdown signal, test time was about 10.000000 seconds 00:12:07.305 00:12:07.305 Latency(us) 00:12:07.305 [2024-11-26T19:46:02.552Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:07.305 [2024-11-26T19:46:02.552Z] =================================================================================================================== 00:12:07.305 [2024-11-26T19:46:02.552Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:07.305 19:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:12:07.305 19:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:12:07.305 19:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70336' 00:12:07.305 19:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 70336 00:12:07.305 19:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 70336 00:12:07.305 19:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.ZiBJzuZ0GM 00:12:07.305 19:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZiBJzuZ0GM 00:12:07.305 19:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:12:07.305 19:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZiBJzuZ0GM 00:12:07.305 19:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:12:07.305 19:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:07.305 19:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:12:07.305 19:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:07.305 19:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZiBJzuZ0GM 00:12:07.305 19:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:07.305 19:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:07.305 19:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:07.305 19:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ZiBJzuZ0GM 00:12:07.305 19:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:07.305 19:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=70472 00:12:07.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:07.305 19:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:07.305 19:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 70472 /var/tmp/bdevperf.sock 00:12:07.305 19:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 70472 ']' 00:12:07.305 19:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:07.305 19:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:07.305 19:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:07.305 19:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:07.305 19:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:07.305 19:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:07.305 [2024-11-26 19:46:02.453957] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:12:07.305 [2024-11-26 19:46:02.454023] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70472 ] 00:12:07.563 [2024-11-26 19:46:02.588510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:07.563 [2024-11-26 19:46:02.620509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:07.563 [2024-11-26 19:46:02.648751] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:08.128 19:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:08.128 19:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:08.128 19:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZiBJzuZ0GM 00:12:08.386 [2024-11-26 19:46:03.552172] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.ZiBJzuZ0GM': 0100666 00:12:08.386 [2024-11-26 19:46:03.552452] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:12:08.386 request: 00:12:08.386 { 00:12:08.386 "name": "key0", 00:12:08.386 "path": "/tmp/tmp.ZiBJzuZ0GM", 00:12:08.386 "method": "keyring_file_add_key", 00:12:08.386 "req_id": 1 00:12:08.386 } 00:12:08.386 Got JSON-RPC error response 00:12:08.386 response: 00:12:08.386 { 00:12:08.386 "code": -1, 00:12:08.386 "message": "Operation not permitted" 00:12:08.386 } 00:12:08.387 19:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:12:08.645 [2024-11-26 19:46:03.760290] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:08.645 [2024-11-26 19:46:03.760390] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:12:08.645 request: 00:12:08.645 { 00:12:08.645 "name": "TLSTEST", 00:12:08.645 "trtype": "tcp", 00:12:08.645 "traddr": "10.0.0.3", 00:12:08.645 "adrfam": "ipv4", 00:12:08.645 "trsvcid": "4420", 00:12:08.645 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:08.645 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:08.645 "prchk_reftag": false, 00:12:08.645 "prchk_guard": false, 00:12:08.645 "hdgst": false, 00:12:08.645 "ddgst": false, 00:12:08.645 "psk": "key0", 00:12:08.645 "allow_unrecognized_csi": false, 00:12:08.645 "method": "bdev_nvme_attach_controller", 00:12:08.645 "req_id": 1 00:12:08.645 } 00:12:08.645 Got JSON-RPC error response 00:12:08.645 response: 00:12:08.645 { 00:12:08.645 "code": -126, 00:12:08.645 "message": "Required key not available" 00:12:08.645 } 00:12:08.645 19:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 70472 00:12:08.645 19:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 70472 ']' 00:12:08.645 19:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 70472 00:12:08.645 19:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:08.645 19:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:08.645 19:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70472 00:12:08.645 killing process with pid 70472 00:12:08.645 Received shutdown signal, test time was about 10.000000 seconds 00:12:08.645 00:12:08.645 Latency(us) 00:12:08.645 [2024-11-26T19:46:03.892Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:08.645 [2024-11-26T19:46:03.892Z] =================================================================================================================== 00:12:08.645 [2024-11-26T19:46:03.892Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:08.645 19:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:12:08.645 19:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:12:08.645 19:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70472' 00:12:08.645 19:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 70472 00:12:08.645 19:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 70472 00:12:08.906 19:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:12:08.906 19:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:12:08.906 19:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:08.906 19:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:08.906 19:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:08.906 19:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 70286 00:12:08.906 19:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 70286 ']' 00:12:08.906 19:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 70286 00:12:08.906 19:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:08.906 19:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:08.906 19:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70286 00:12:08.906 killing process with pid 70286 00:12:08.906 19:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:08.906 19:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:08.906 19:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70286' 00:12:08.906 19:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 70286 00:12:08.906 19:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 70286 00:12:08.906 19:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:12:08.906 19:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:08.906 19:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:08.906 19:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 00:12:08.906 19:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=70510 00:12:08.906 19:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 70510 00:12:08.906 19:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 70510 ']' 00:12:08.906 19:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:08.906 19:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:08.906 19:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:08.906 19:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:08.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:08.906 19:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:08.906 19:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:08.906 [2024-11-26 19:46:04.074707] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:12:08.906 [2024-11-26 19:46:04.074778] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:09.165 [2024-11-26 19:46:04.211831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:09.165 [2024-11-26 19:46:04.241245] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:09.165 [2024-11-26 19:46:04.241278] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:09.165 [2024-11-26 19:46:04.241283] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:09.165 [2024-11-26 19:46:04.241286] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:09.165 [2024-11-26 19:46:04.241290] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
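For reference, the negative test driven by target/tls.sh@171-172 above hinges on the keyring's permission check: a PSK file left readable by group or other (mode 0666) is rejected by keyring_file_check_path, so keyring_file_add_key returns -1 (Operation not permitted) and the follow-on bdev_nvme_attach_controller fails with -126 (Required key not available). A minimal sketch of that behaviour using the same repo path, key file, and bdevperf RPC socket as the run above (only the rpc shell variable is added for readability):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# world-readable PSK files are rejected by the keyring
chmod 0666 /tmp/tmp.ZiBJzuZ0GM
$rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZiBJzuZ0GM
# -> JSON-RPC error -1: "Operation not permitted"

# owner-only permissions are required before the key can be registered
chmod 0600 /tmp/tmp.ZiBJzuZ0GM
$rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZiBJzuZ0GM
# -> key0 added; bdev_nvme_attach_controller --psk key0 can then succeed

The 0600 restore is exactly what target/tls.sh@182 does further down before the successful setup is repeated.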
00:12:09.165 [2024-11-26 19:46:04.241508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:09.165 [2024-11-26 19:46:04.269069] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:09.731 19:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:09.731 19:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:09.731 19:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:09.731 19:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:09.731 19:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:09.731 19:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:09.731 19:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.ZiBJzuZ0GM 00:12:09.731 19:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:12:09.731 19:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.ZiBJzuZ0GM 00:12:09.731 19:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:12:09.731 19:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:09.731 19:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:12:09.731 19:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:09.731 19:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.ZiBJzuZ0GM 00:12:09.731 19:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ZiBJzuZ0GM 00:12:09.731 19:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:09.989 [2024-11-26 19:46:05.159335] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:09.989 19:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:10.246 19:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:12:10.504 [2024-11-26 19:46:05.559375] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:10.504 [2024-11-26 19:46:05.559523] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:10.504 19:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:10.762 malloc0 00:12:10.762 19:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:10.762 19:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ZiBJzuZ0GM 00:12:11.019 
[2024-11-26 19:46:06.169222] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.ZiBJzuZ0GM': 0100666 00:12:11.019 [2024-11-26 19:46:06.169258] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:12:11.019 request: 00:12:11.019 { 00:12:11.019 "name": "key0", 00:12:11.019 "path": "/tmp/tmp.ZiBJzuZ0GM", 00:12:11.019 "method": "keyring_file_add_key", 00:12:11.019 "req_id": 1 00:12:11.019 } 00:12:11.019 Got JSON-RPC error response 00:12:11.019 response: 00:12:11.019 { 00:12:11.019 "code": -1, 00:12:11.019 "message": "Operation not permitted" 00:12:11.019 } 00:12:11.019 19:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:12:11.277 [2024-11-26 19:46:06.377277] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:12:11.277 [2024-11-26 19:46:06.377319] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:12:11.277 request: 00:12:11.277 { 00:12:11.277 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:11.277 "host": "nqn.2016-06.io.spdk:host1", 00:12:11.277 "psk": "key0", 00:12:11.277 "method": "nvmf_subsystem_add_host", 00:12:11.277 "req_id": 1 00:12:11.277 } 00:12:11.277 Got JSON-RPC error response 00:12:11.277 response: 00:12:11.277 { 00:12:11.277 "code": -32603, 00:12:11.277 "message": "Internal error" 00:12:11.277 } 00:12:11.277 19:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:12:11.277 19:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:11.277 19:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:11.277 19:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:11.277 19:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 70510 00:12:11.277 19:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 70510 ']' 00:12:11.277 19:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 70510 00:12:11.277 19:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:11.277 19:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:11.277 19:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70510 00:12:11.277 killing process with pid 70510 00:12:11.277 19:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:11.277 19:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:11.277 19:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70510' 00:12:11.277 19:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 70510 00:12:11.277 19:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 70510 00:12:11.535 19:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.ZiBJzuZ0GM 00:12:11.535 19:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:12:11.535 19:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:11.535 19:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:11.535 19:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:11.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:11.535 19:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=70569 00:12:11.535 19:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:11.535 19:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 70569 00:12:11.535 19:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 70569 ']' 00:12:11.535 19:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:11.535 19:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:11.535 19:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:11.535 19:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:11.535 19:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:11.535 [2024-11-26 19:46:06.570226] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:12:11.535 [2024-11-26 19:46:06.570288] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:11.535 [2024-11-26 19:46:06.707885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:11.535 [2024-11-26 19:46:06.737694] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:11.535 [2024-11-26 19:46:06.737730] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:11.535 [2024-11-26 19:46:06.737736] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:11.535 [2024-11-26 19:46:06.737740] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:11.535 [2024-11-26 19:46:06.737743] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
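The nvmfappstart helper traced above amounts to launching nvmf_tgt inside the test's network namespace and waiting until its RPC socket answers. A minimal stand-in for that flow with the same binary, namespace, and flags as this job; the polling loop with spdk_get_version is an assumption standing in for the harness's waitforlisten helper:

# start the target on core mask 0x2 with all trace groups enabled (-e 0xFFFF)
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!

# block until the app responds on its default RPC socket (/var/tmp/spdk.sock)
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version >/dev/null 2>&1; do
    sleep 0.5
done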
00:12:11.535 [2024-11-26 19:46:06.737964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:11.535 [2024-11-26 19:46:06.765788] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:12.469 19:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:12.469 19:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:12.469 19:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:12.469 19:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:12.469 19:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:12.469 19:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:12.469 19:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.ZiBJzuZ0GM 00:12:12.469 19:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ZiBJzuZ0GM 00:12:12.469 19:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:12.469 [2024-11-26 19:46:07.664636] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:12.469 19:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:12.727 19:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:12:12.986 [2024-11-26 19:46:08.068703] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:12.986 [2024-11-26 19:46:08.068866] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:12.986 19:46:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:13.244 malloc0 00:12:13.244 19:46:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:13.244 19:46:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ZiBJzuZ0GM 00:12:13.521 19:46:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:12:13.784 19:46:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:13.784 19:46:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=70619 00:12:13.784 19:46:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:13.784 19:46:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 70619 /var/tmp/bdevperf.sock 00:12:13.784 19:46:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 70619 ']' 
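Stripped of the xtrace noise, the successful setup performed by target/tls.sh@52-59 and @188-194 above is the following RPC sequence against the target and the bdevperf initiator. Every command is taken from the log; only the rpc shell variable and the line wrapping are added:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# target side: TCP transport, subsystem, TLS-enabled listener (-k), namespace, PSK
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc keyring_file_add_key key0 /tmp/tmp.ZiBJzuZ0GM
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

# initiator side: bdevperf in server mode (-z) on its own RPC socket, same PSK
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
$rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZiBJzuZ0GM
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk key0

# drive the verify workload over the TLS connection
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -t 20 -s /var/tmp/bdevperf.sock perform_tests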
00:12:13.784 19:46:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:13.784 19:46:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:13.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:13.784 19:46:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:13.784 19:46:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:13.784 19:46:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:13.784 [2024-11-26 19:46:08.876066] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:12:13.784 [2024-11-26 19:46:08.876133] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70619 ] 00:12:13.784 [2024-11-26 19:46:09.019803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.041 [2024-11-26 19:46:09.056508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:14.041 [2024-11-26 19:46:09.086985] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:14.607 19:46:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:14.607 19:46:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:14.608 19:46:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZiBJzuZ0GM 00:12:14.867 19:46:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:12:14.867 [2024-11-26 19:46:10.091431] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:15.125 TLSTESTn1 00:12:15.125 19:46:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:12:15.383 19:46:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:12:15.383 "subsystems": [ 00:12:15.383 { 00:12:15.383 "subsystem": "keyring", 00:12:15.383 "config": [ 00:12:15.383 { 00:12:15.383 "method": "keyring_file_add_key", 00:12:15.383 "params": { 00:12:15.383 "name": "key0", 00:12:15.383 "path": "/tmp/tmp.ZiBJzuZ0GM" 00:12:15.383 } 00:12:15.383 } 00:12:15.383 ] 00:12:15.383 }, 00:12:15.383 { 00:12:15.383 "subsystem": "iobuf", 00:12:15.383 "config": [ 00:12:15.383 { 00:12:15.383 "method": "iobuf_set_options", 00:12:15.383 "params": { 00:12:15.383 "small_pool_count": 8192, 00:12:15.383 "large_pool_count": 1024, 00:12:15.383 "small_bufsize": 8192, 00:12:15.383 "large_bufsize": 135168, 00:12:15.383 "enable_numa": false 00:12:15.383 } 00:12:15.383 } 00:12:15.383 ] 00:12:15.383 }, 00:12:15.383 { 00:12:15.383 "subsystem": "sock", 00:12:15.383 "config": [ 00:12:15.383 { 00:12:15.383 "method": "sock_set_default_impl", 00:12:15.383 "params": { 
00:12:15.383 "impl_name": "uring" 00:12:15.383 } 00:12:15.383 }, 00:12:15.383 { 00:12:15.383 "method": "sock_impl_set_options", 00:12:15.383 "params": { 00:12:15.383 "impl_name": "ssl", 00:12:15.383 "recv_buf_size": 4096, 00:12:15.383 "send_buf_size": 4096, 00:12:15.383 "enable_recv_pipe": true, 00:12:15.383 "enable_quickack": false, 00:12:15.383 "enable_placement_id": 0, 00:12:15.383 "enable_zerocopy_send_server": true, 00:12:15.383 "enable_zerocopy_send_client": false, 00:12:15.383 "zerocopy_threshold": 0, 00:12:15.383 "tls_version": 0, 00:12:15.383 "enable_ktls": false 00:12:15.383 } 00:12:15.383 }, 00:12:15.383 { 00:12:15.383 "method": "sock_impl_set_options", 00:12:15.383 "params": { 00:12:15.383 "impl_name": "posix", 00:12:15.383 "recv_buf_size": 2097152, 00:12:15.383 "send_buf_size": 2097152, 00:12:15.383 "enable_recv_pipe": true, 00:12:15.383 "enable_quickack": false, 00:12:15.383 "enable_placement_id": 0, 00:12:15.383 "enable_zerocopy_send_server": true, 00:12:15.383 "enable_zerocopy_send_client": false, 00:12:15.383 "zerocopy_threshold": 0, 00:12:15.383 "tls_version": 0, 00:12:15.383 "enable_ktls": false 00:12:15.383 } 00:12:15.383 }, 00:12:15.383 { 00:12:15.383 "method": "sock_impl_set_options", 00:12:15.383 "params": { 00:12:15.383 "impl_name": "uring", 00:12:15.383 "recv_buf_size": 2097152, 00:12:15.383 "send_buf_size": 2097152, 00:12:15.383 "enable_recv_pipe": true, 00:12:15.383 "enable_quickack": false, 00:12:15.383 "enable_placement_id": 0, 00:12:15.383 "enable_zerocopy_send_server": false, 00:12:15.383 "enable_zerocopy_send_client": false, 00:12:15.383 "zerocopy_threshold": 0, 00:12:15.383 "tls_version": 0, 00:12:15.383 "enable_ktls": false 00:12:15.383 } 00:12:15.383 } 00:12:15.383 ] 00:12:15.383 }, 00:12:15.383 { 00:12:15.383 "subsystem": "vmd", 00:12:15.383 "config": [] 00:12:15.383 }, 00:12:15.383 { 00:12:15.383 "subsystem": "accel", 00:12:15.384 "config": [ 00:12:15.384 { 00:12:15.384 "method": "accel_set_options", 00:12:15.384 "params": { 00:12:15.384 "small_cache_size": 128, 00:12:15.384 "large_cache_size": 16, 00:12:15.384 "task_count": 2048, 00:12:15.384 "sequence_count": 2048, 00:12:15.384 "buf_count": 2048 00:12:15.384 } 00:12:15.384 } 00:12:15.384 ] 00:12:15.384 }, 00:12:15.384 { 00:12:15.384 "subsystem": "bdev", 00:12:15.384 "config": [ 00:12:15.384 { 00:12:15.384 "method": "bdev_set_options", 00:12:15.384 "params": { 00:12:15.384 "bdev_io_pool_size": 65535, 00:12:15.384 "bdev_io_cache_size": 256, 00:12:15.384 "bdev_auto_examine": true, 00:12:15.384 "iobuf_small_cache_size": 128, 00:12:15.384 "iobuf_large_cache_size": 16 00:12:15.384 } 00:12:15.384 }, 00:12:15.384 { 00:12:15.384 "method": "bdev_raid_set_options", 00:12:15.384 "params": { 00:12:15.384 "process_window_size_kb": 1024, 00:12:15.384 "process_max_bandwidth_mb_sec": 0 00:12:15.384 } 00:12:15.384 }, 00:12:15.384 { 00:12:15.384 "method": "bdev_iscsi_set_options", 00:12:15.384 "params": { 00:12:15.384 "timeout_sec": 30 00:12:15.384 } 00:12:15.384 }, 00:12:15.384 { 00:12:15.384 "method": "bdev_nvme_set_options", 00:12:15.384 "params": { 00:12:15.384 "action_on_timeout": "none", 00:12:15.384 "timeout_us": 0, 00:12:15.384 "timeout_admin_us": 0, 00:12:15.384 "keep_alive_timeout_ms": 10000, 00:12:15.384 "arbitration_burst": 0, 00:12:15.384 "low_priority_weight": 0, 00:12:15.384 "medium_priority_weight": 0, 00:12:15.384 "high_priority_weight": 0, 00:12:15.384 "nvme_adminq_poll_period_us": 10000, 00:12:15.384 "nvme_ioq_poll_period_us": 0, 00:12:15.384 "io_queue_requests": 0, 00:12:15.384 "delay_cmd_submit": 
true, 00:12:15.384 "transport_retry_count": 4, 00:12:15.384 "bdev_retry_count": 3, 00:12:15.384 "transport_ack_timeout": 0, 00:12:15.384 "ctrlr_loss_timeout_sec": 0, 00:12:15.384 "reconnect_delay_sec": 0, 00:12:15.384 "fast_io_fail_timeout_sec": 0, 00:12:15.384 "disable_auto_failback": false, 00:12:15.384 "generate_uuids": false, 00:12:15.384 "transport_tos": 0, 00:12:15.384 "nvme_error_stat": false, 00:12:15.384 "rdma_srq_size": 0, 00:12:15.384 "io_path_stat": false, 00:12:15.384 "allow_accel_sequence": false, 00:12:15.384 "rdma_max_cq_size": 0, 00:12:15.384 "rdma_cm_event_timeout_ms": 0, 00:12:15.384 "dhchap_digests": [ 00:12:15.384 "sha256", 00:12:15.384 "sha384", 00:12:15.384 "sha512" 00:12:15.384 ], 00:12:15.384 "dhchap_dhgroups": [ 00:12:15.384 "null", 00:12:15.384 "ffdhe2048", 00:12:15.384 "ffdhe3072", 00:12:15.384 "ffdhe4096", 00:12:15.384 "ffdhe6144", 00:12:15.384 "ffdhe8192" 00:12:15.384 ] 00:12:15.384 } 00:12:15.384 }, 00:12:15.384 { 00:12:15.384 "method": "bdev_nvme_set_hotplug", 00:12:15.384 "params": { 00:12:15.384 "period_us": 100000, 00:12:15.384 "enable": false 00:12:15.384 } 00:12:15.384 }, 00:12:15.384 { 00:12:15.384 "method": "bdev_malloc_create", 00:12:15.384 "params": { 00:12:15.384 "name": "malloc0", 00:12:15.384 "num_blocks": 8192, 00:12:15.384 "block_size": 4096, 00:12:15.384 "physical_block_size": 4096, 00:12:15.384 "uuid": "8c1e7494-95b1-4105-a3a8-136f570b45c0", 00:12:15.384 "optimal_io_boundary": 0, 00:12:15.384 "md_size": 0, 00:12:15.384 "dif_type": 0, 00:12:15.384 "dif_is_head_of_md": false, 00:12:15.384 "dif_pi_format": 0 00:12:15.384 } 00:12:15.384 }, 00:12:15.384 { 00:12:15.384 "method": "bdev_wait_for_examine" 00:12:15.384 } 00:12:15.384 ] 00:12:15.384 }, 00:12:15.384 { 00:12:15.384 "subsystem": "nbd", 00:12:15.384 "config": [] 00:12:15.384 }, 00:12:15.384 { 00:12:15.384 "subsystem": "scheduler", 00:12:15.384 "config": [ 00:12:15.384 { 00:12:15.384 "method": "framework_set_scheduler", 00:12:15.384 "params": { 00:12:15.384 "name": "static" 00:12:15.384 } 00:12:15.384 } 00:12:15.384 ] 00:12:15.384 }, 00:12:15.384 { 00:12:15.384 "subsystem": "nvmf", 00:12:15.384 "config": [ 00:12:15.384 { 00:12:15.384 "method": "nvmf_set_config", 00:12:15.384 "params": { 00:12:15.384 "discovery_filter": "match_any", 00:12:15.384 "admin_cmd_passthru": { 00:12:15.384 "identify_ctrlr": false 00:12:15.384 }, 00:12:15.384 "dhchap_digests": [ 00:12:15.384 "sha256", 00:12:15.384 "sha384", 00:12:15.384 "sha512" 00:12:15.384 ], 00:12:15.384 "dhchap_dhgroups": [ 00:12:15.384 "null", 00:12:15.384 "ffdhe2048", 00:12:15.384 "ffdhe3072", 00:12:15.384 "ffdhe4096", 00:12:15.384 "ffdhe6144", 00:12:15.384 "ffdhe8192" 00:12:15.384 ] 00:12:15.384 } 00:12:15.384 }, 00:12:15.384 { 00:12:15.384 "method": "nvmf_set_max_subsystems", 00:12:15.384 "params": { 00:12:15.384 "max_subsystems": 1024 00:12:15.384 } 00:12:15.384 }, 00:12:15.384 { 00:12:15.384 "method": "nvmf_set_crdt", 00:12:15.384 "params": { 00:12:15.384 "crdt1": 0, 00:12:15.384 "crdt2": 0, 00:12:15.384 "crdt3": 0 00:12:15.384 } 00:12:15.384 }, 00:12:15.384 { 00:12:15.384 "method": "nvmf_create_transport", 00:12:15.384 "params": { 00:12:15.384 "trtype": "TCP", 00:12:15.384 "max_queue_depth": 128, 00:12:15.384 "max_io_qpairs_per_ctrlr": 127, 00:12:15.384 "in_capsule_data_size": 4096, 00:12:15.384 "max_io_size": 131072, 00:12:15.384 "io_unit_size": 131072, 00:12:15.384 "max_aq_depth": 128, 00:12:15.384 "num_shared_buffers": 511, 00:12:15.385 "buf_cache_size": 4294967295, 00:12:15.385 "dif_insert_or_strip": false, 00:12:15.385 "zcopy": false, 
00:12:15.385 "c2h_success": false, 00:12:15.385 "sock_priority": 0, 00:12:15.385 "abort_timeout_sec": 1, 00:12:15.385 "ack_timeout": 0, 00:12:15.385 "data_wr_pool_size": 0 00:12:15.385 } 00:12:15.385 }, 00:12:15.385 { 00:12:15.385 "method": "nvmf_create_subsystem", 00:12:15.385 "params": { 00:12:15.385 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:15.385 "allow_any_host": false, 00:12:15.385 "serial_number": "SPDK00000000000001", 00:12:15.385 "model_number": "SPDK bdev Controller", 00:12:15.385 "max_namespaces": 10, 00:12:15.385 "min_cntlid": 1, 00:12:15.385 "max_cntlid": 65519, 00:12:15.385 "ana_reporting": false 00:12:15.385 } 00:12:15.385 }, 00:12:15.385 { 00:12:15.385 "method": "nvmf_subsystem_add_host", 00:12:15.385 "params": { 00:12:15.385 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:15.385 "host": "nqn.2016-06.io.spdk:host1", 00:12:15.385 "psk": "key0" 00:12:15.385 } 00:12:15.385 }, 00:12:15.385 { 00:12:15.385 "method": "nvmf_subsystem_add_ns", 00:12:15.385 "params": { 00:12:15.385 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:15.385 "namespace": { 00:12:15.385 "nsid": 1, 00:12:15.385 "bdev_name": "malloc0", 00:12:15.385 "nguid": "8C1E749495B14105A3A8136F570B45C0", 00:12:15.385 "uuid": "8c1e7494-95b1-4105-a3a8-136f570b45c0", 00:12:15.385 "no_auto_visible": false 00:12:15.385 } 00:12:15.385 } 00:12:15.385 }, 00:12:15.385 { 00:12:15.385 "method": "nvmf_subsystem_add_listener", 00:12:15.385 "params": { 00:12:15.385 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:15.385 "listen_address": { 00:12:15.385 "trtype": "TCP", 00:12:15.385 "adrfam": "IPv4", 00:12:15.385 "traddr": "10.0.0.3", 00:12:15.385 "trsvcid": "4420" 00:12:15.385 }, 00:12:15.385 "secure_channel": true 00:12:15.385 } 00:12:15.385 } 00:12:15.385 ] 00:12:15.385 } 00:12:15.385 ] 00:12:15.385 }' 00:12:15.385 19:46:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:12:15.643 19:46:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:12:15.643 "subsystems": [ 00:12:15.643 { 00:12:15.643 "subsystem": "keyring", 00:12:15.643 "config": [ 00:12:15.643 { 00:12:15.643 "method": "keyring_file_add_key", 00:12:15.643 "params": { 00:12:15.643 "name": "key0", 00:12:15.643 "path": "/tmp/tmp.ZiBJzuZ0GM" 00:12:15.643 } 00:12:15.643 } 00:12:15.643 ] 00:12:15.643 }, 00:12:15.643 { 00:12:15.643 "subsystem": "iobuf", 00:12:15.643 "config": [ 00:12:15.643 { 00:12:15.643 "method": "iobuf_set_options", 00:12:15.643 "params": { 00:12:15.643 "small_pool_count": 8192, 00:12:15.643 "large_pool_count": 1024, 00:12:15.643 "small_bufsize": 8192, 00:12:15.643 "large_bufsize": 135168, 00:12:15.643 "enable_numa": false 00:12:15.643 } 00:12:15.643 } 00:12:15.643 ] 00:12:15.643 }, 00:12:15.643 { 00:12:15.643 "subsystem": "sock", 00:12:15.643 "config": [ 00:12:15.643 { 00:12:15.643 "method": "sock_set_default_impl", 00:12:15.643 "params": { 00:12:15.643 "impl_name": "uring" 00:12:15.643 } 00:12:15.643 }, 00:12:15.643 { 00:12:15.643 "method": "sock_impl_set_options", 00:12:15.643 "params": { 00:12:15.643 "impl_name": "ssl", 00:12:15.643 "recv_buf_size": 4096, 00:12:15.643 "send_buf_size": 4096, 00:12:15.643 "enable_recv_pipe": true, 00:12:15.643 "enable_quickack": false, 00:12:15.643 "enable_placement_id": 0, 00:12:15.644 "enable_zerocopy_send_server": true, 00:12:15.644 "enable_zerocopy_send_client": false, 00:12:15.644 "zerocopy_threshold": 0, 00:12:15.644 "tls_version": 0, 00:12:15.644 "enable_ktls": false 00:12:15.644 } 00:12:15.644 }, 
00:12:15.644 { 00:12:15.644 "method": "sock_impl_set_options", 00:12:15.644 "params": { 00:12:15.644 "impl_name": "posix", 00:12:15.644 "recv_buf_size": 2097152, 00:12:15.644 "send_buf_size": 2097152, 00:12:15.644 "enable_recv_pipe": true, 00:12:15.644 "enable_quickack": false, 00:12:15.644 "enable_placement_id": 0, 00:12:15.644 "enable_zerocopy_send_server": true, 00:12:15.644 "enable_zerocopy_send_client": false, 00:12:15.644 "zerocopy_threshold": 0, 00:12:15.644 "tls_version": 0, 00:12:15.644 "enable_ktls": false 00:12:15.644 } 00:12:15.644 }, 00:12:15.644 { 00:12:15.644 "method": "sock_impl_set_options", 00:12:15.644 "params": { 00:12:15.644 "impl_name": "uring", 00:12:15.644 "recv_buf_size": 2097152, 00:12:15.644 "send_buf_size": 2097152, 00:12:15.644 "enable_recv_pipe": true, 00:12:15.644 "enable_quickack": false, 00:12:15.644 "enable_placement_id": 0, 00:12:15.644 "enable_zerocopy_send_server": false, 00:12:15.644 "enable_zerocopy_send_client": false, 00:12:15.644 "zerocopy_threshold": 0, 00:12:15.644 "tls_version": 0, 00:12:15.644 "enable_ktls": false 00:12:15.644 } 00:12:15.644 } 00:12:15.644 ] 00:12:15.644 }, 00:12:15.644 { 00:12:15.644 "subsystem": "vmd", 00:12:15.644 "config": [] 00:12:15.644 }, 00:12:15.644 { 00:12:15.644 "subsystem": "accel", 00:12:15.644 "config": [ 00:12:15.644 { 00:12:15.644 "method": "accel_set_options", 00:12:15.644 "params": { 00:12:15.644 "small_cache_size": 128, 00:12:15.644 "large_cache_size": 16, 00:12:15.644 "task_count": 2048, 00:12:15.644 "sequence_count": 2048, 00:12:15.644 "buf_count": 2048 00:12:15.644 } 00:12:15.644 } 00:12:15.644 ] 00:12:15.644 }, 00:12:15.644 { 00:12:15.644 "subsystem": "bdev", 00:12:15.644 "config": [ 00:12:15.644 { 00:12:15.644 "method": "bdev_set_options", 00:12:15.644 "params": { 00:12:15.644 "bdev_io_pool_size": 65535, 00:12:15.644 "bdev_io_cache_size": 256, 00:12:15.644 "bdev_auto_examine": true, 00:12:15.644 "iobuf_small_cache_size": 128, 00:12:15.644 "iobuf_large_cache_size": 16 00:12:15.644 } 00:12:15.644 }, 00:12:15.644 { 00:12:15.644 "method": "bdev_raid_set_options", 00:12:15.644 "params": { 00:12:15.644 "process_window_size_kb": 1024, 00:12:15.644 "process_max_bandwidth_mb_sec": 0 00:12:15.644 } 00:12:15.644 }, 00:12:15.644 { 00:12:15.644 "method": "bdev_iscsi_set_options", 00:12:15.644 "params": { 00:12:15.644 "timeout_sec": 30 00:12:15.644 } 00:12:15.644 }, 00:12:15.644 { 00:12:15.644 "method": "bdev_nvme_set_options", 00:12:15.644 "params": { 00:12:15.644 "action_on_timeout": "none", 00:12:15.644 "timeout_us": 0, 00:12:15.644 "timeout_admin_us": 0, 00:12:15.644 "keep_alive_timeout_ms": 10000, 00:12:15.644 "arbitration_burst": 0, 00:12:15.644 "low_priority_weight": 0, 00:12:15.644 "medium_priority_weight": 0, 00:12:15.644 "high_priority_weight": 0, 00:12:15.644 "nvme_adminq_poll_period_us": 10000, 00:12:15.644 "nvme_ioq_poll_period_us": 0, 00:12:15.644 "io_queue_requests": 512, 00:12:15.644 "delay_cmd_submit": true, 00:12:15.644 "transport_retry_count": 4, 00:12:15.644 "bdev_retry_count": 3, 00:12:15.644 "transport_ack_timeout": 0, 00:12:15.644 "ctrlr_loss_timeout_sec": 0, 00:12:15.644 "reconnect_delay_sec": 0, 00:12:15.644 "fast_io_fail_timeout_sec": 0, 00:12:15.644 "disable_auto_failback": false, 00:12:15.644 "generate_uuids": false, 00:12:15.644 "transport_tos": 0, 00:12:15.644 "nvme_error_stat": false, 00:12:15.644 "rdma_srq_size": 0, 00:12:15.644 "io_path_stat": false, 00:12:15.644 "allow_accel_sequence": false, 00:12:15.644 "rdma_max_cq_size": 0, 00:12:15.644 "rdma_cm_event_timeout_ms": 0, 00:12:15.644 
"dhchap_digests": [ 00:12:15.644 "sha256", 00:12:15.644 "sha384", 00:12:15.644 "sha512" 00:12:15.644 ], 00:12:15.644 "dhchap_dhgroups": [ 00:12:15.644 "null", 00:12:15.644 "ffdhe2048", 00:12:15.644 "ffdhe3072", 00:12:15.644 "ffdhe4096", 00:12:15.644 "ffdhe6144", 00:12:15.644 "ffdhe8192" 00:12:15.644 ] 00:12:15.644 } 00:12:15.644 }, 00:12:15.644 { 00:12:15.644 "method": "bdev_nvme_attach_controller", 00:12:15.644 "params": { 00:12:15.644 "name": "TLSTEST", 00:12:15.644 "trtype": "TCP", 00:12:15.644 "adrfam": "IPv4", 00:12:15.644 "traddr": "10.0.0.3", 00:12:15.644 "trsvcid": "4420", 00:12:15.644 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:15.644 "prchk_reftag": false, 00:12:15.644 "prchk_guard": false, 00:12:15.644 "ctrlr_loss_timeout_sec": 0, 00:12:15.644 "reconnect_delay_sec": 0, 00:12:15.644 "fast_io_fail_timeout_sec": 0, 00:12:15.644 "psk": "key0", 00:12:15.644 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:15.644 "hdgst": false, 00:12:15.644 "ddgst": false, 00:12:15.644 "multipath": "multipath" 00:12:15.644 } 00:12:15.644 }, 00:12:15.644 { 00:12:15.644 "method": "bdev_nvme_set_hotplug", 00:12:15.644 "params": { 00:12:15.644 "period_us": 100000, 00:12:15.644 "enable": false 00:12:15.644 } 00:12:15.644 }, 00:12:15.644 { 00:12:15.644 "method": "bdev_wait_for_examine" 00:12:15.644 } 00:12:15.644 ] 00:12:15.644 }, 00:12:15.644 { 00:12:15.644 "subsystem": "nbd", 00:12:15.644 "config": [] 00:12:15.644 } 00:12:15.644 ] 00:12:15.644 }' 00:12:15.644 19:46:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 70619 00:12:15.644 19:46:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 70619 ']' 00:12:15.644 19:46:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 70619 00:12:15.644 19:46:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:15.644 19:46:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:15.644 19:46:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70619 00:12:15.644 19:46:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:12:15.644 19:46:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:12:15.644 killing process with pid 70619 00:12:15.644 19:46:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70619' 00:12:15.644 19:46:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 70619 00:12:15.644 Received shutdown signal, test time was about 10.000000 seconds 00:12:15.644 00:12:15.644 Latency(us) 00:12:15.644 [2024-11-26T19:46:10.891Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:15.644 [2024-11-26T19:46:10.891Z] =================================================================================================================== 00:12:15.644 [2024-11-26T19:46:10.891Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:15.644 19:46:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 70619 00:12:15.644 19:46:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 70569 00:12:15.644 19:46:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 70569 ']' 00:12:15.644 19:46:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # 
kill -0 70569 00:12:15.644 19:46:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:15.644 19:46:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:15.644 19:46:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70569 00:12:15.903 19:46:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:15.903 19:46:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:15.903 killing process with pid 70569 00:12:15.903 19:46:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70569' 00:12:15.903 19:46:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 70569 00:12:15.903 19:46:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 70569 00:12:15.903 19:46:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:12:15.903 19:46:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:15.903 19:46:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:15.903 19:46:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:15.903 19:46:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:12:15.903 "subsystems": [ 00:12:15.903 { 00:12:15.903 "subsystem": "keyring", 00:12:15.903 "config": [ 00:12:15.903 { 00:12:15.903 "method": "keyring_file_add_key", 00:12:15.903 "params": { 00:12:15.903 "name": "key0", 00:12:15.903 "path": "/tmp/tmp.ZiBJzuZ0GM" 00:12:15.903 } 00:12:15.903 } 00:12:15.903 ] 00:12:15.903 }, 00:12:15.903 { 00:12:15.903 "subsystem": "iobuf", 00:12:15.903 "config": [ 00:12:15.903 { 00:12:15.903 "method": "iobuf_set_options", 00:12:15.903 "params": { 00:12:15.903 "small_pool_count": 8192, 00:12:15.903 "large_pool_count": 1024, 00:12:15.903 "small_bufsize": 8192, 00:12:15.903 "large_bufsize": 135168, 00:12:15.903 "enable_numa": false 00:12:15.903 } 00:12:15.903 } 00:12:15.903 ] 00:12:15.903 }, 00:12:15.903 { 00:12:15.903 "subsystem": "sock", 00:12:15.903 "config": [ 00:12:15.903 { 00:12:15.903 "method": "sock_set_default_impl", 00:12:15.903 "params": { 00:12:15.903 "impl_name": "uring" 00:12:15.903 } 00:12:15.903 }, 00:12:15.903 { 00:12:15.903 "method": "sock_impl_set_options", 00:12:15.903 "params": { 00:12:15.903 "impl_name": "ssl", 00:12:15.903 "recv_buf_size": 4096, 00:12:15.903 "send_buf_size": 4096, 00:12:15.903 "enable_recv_pipe": true, 00:12:15.903 "enable_quickack": false, 00:12:15.903 "enable_placement_id": 0, 00:12:15.903 "enable_zerocopy_send_server": true, 00:12:15.903 "enable_zerocopy_send_client": false, 00:12:15.903 "zerocopy_threshold": 0, 00:12:15.903 "tls_version": 0, 00:12:15.903 "enable_ktls": false 00:12:15.903 } 00:12:15.903 }, 00:12:15.903 { 00:12:15.903 "method": "sock_impl_set_options", 00:12:15.903 "params": { 00:12:15.903 "impl_name": "posix", 00:12:15.903 "recv_buf_size": 2097152, 00:12:15.903 "send_buf_size": 2097152, 00:12:15.903 "enable_recv_pipe": true, 00:12:15.903 "enable_quickack": false, 00:12:15.903 "enable_placement_id": 0, 00:12:15.903 "enable_zerocopy_send_server": true, 00:12:15.903 "enable_zerocopy_send_client": false, 00:12:15.903 "zerocopy_threshold": 0, 00:12:15.903 "tls_version": 0, 00:12:15.903 "enable_ktls": false 
00:12:15.903 } 00:12:15.903 }, 00:12:15.903 { 00:12:15.903 "method": "sock_impl_set_options", 00:12:15.903 "params": { 00:12:15.903 "impl_name": "uring", 00:12:15.903 "recv_buf_size": 2097152, 00:12:15.903 "send_buf_size": 2097152, 00:12:15.903 "enable_recv_pipe": true, 00:12:15.903 "enable_quickack": false, 00:12:15.903 "enable_placement_id": 0, 00:12:15.903 "enable_zerocopy_send_server": false, 00:12:15.903 "enable_zerocopy_send_client": false, 00:12:15.903 "zerocopy_threshold": 0, 00:12:15.903 "tls_version": 0, 00:12:15.903 "enable_ktls": false 00:12:15.903 } 00:12:15.903 } 00:12:15.903 ] 00:12:15.903 }, 00:12:15.903 { 00:12:15.903 "subsystem": "vmd", 00:12:15.903 "config": [] 00:12:15.903 }, 00:12:15.903 { 00:12:15.903 "subsystem": "accel", 00:12:15.903 "config": [ 00:12:15.903 { 00:12:15.903 "method": "accel_set_options", 00:12:15.903 "params": { 00:12:15.903 "small_cache_size": 128, 00:12:15.903 "large_cache_size": 16, 00:12:15.903 "task_count": 2048, 00:12:15.903 "sequence_count": 2048, 00:12:15.903 "buf_count": 2048 00:12:15.903 } 00:12:15.903 } 00:12:15.903 ] 00:12:15.903 }, 00:12:15.903 { 00:12:15.903 "subsystem": "bdev", 00:12:15.904 "config": [ 00:12:15.904 { 00:12:15.904 "method": "bdev_set_options", 00:12:15.904 "params": { 00:12:15.904 "bdev_io_pool_size": 65535, 00:12:15.904 "bdev_io_cache_size": 256, 00:12:15.904 "bdev_auto_examine": true, 00:12:15.904 "iobuf_small_cache_size": 128, 00:12:15.904 "iobuf_large_cache_size": 16 00:12:15.904 } 00:12:15.904 }, 00:12:15.904 { 00:12:15.904 "method": "bdev_raid_set_options", 00:12:15.904 "params": { 00:12:15.904 "process_window_size_kb": 1024, 00:12:15.904 "process_max_bandwidth_mb_sec": 0 00:12:15.904 } 00:12:15.904 }, 00:12:15.904 { 00:12:15.904 "method": "bdev_iscsi_set_options", 00:12:15.904 "params": { 00:12:15.904 "timeout_sec": 30 00:12:15.904 } 00:12:15.904 }, 00:12:15.904 { 00:12:15.904 "method": "bdev_nvme_set_options", 00:12:15.904 "params": { 00:12:15.904 "action_on_timeout": "none", 00:12:15.904 "timeout_us": 0, 00:12:15.904 "timeout_admin_us": 0, 00:12:15.904 "keep_alive_timeout_ms": 10000, 00:12:15.904 "arbitration_burst": 0, 00:12:15.904 "low_priority_weight": 0, 00:12:15.904 "medium_priority_weight": 0, 00:12:15.904 "high_priority_weight": 0, 00:12:15.904 "nvme_adminq_poll_period_us": 10000, 00:12:15.904 "nvme_ioq_poll_period_us": 0, 00:12:15.904 "io_queue_requests": 0, 00:12:15.904 "delay_cmd_submit": true, 00:12:15.904 "transport_retry_count": 4, 00:12:15.904 "bdev_retry_count": 3, 00:12:15.904 "transport_ack_timeout": 0, 00:12:15.904 "ctrlr_loss_timeout_sec": 0, 00:12:15.904 "reconnect_delay_sec": 0, 00:12:15.904 "fast_io_fail_timeout_sec": 0, 00:12:15.904 "disable_auto_failback": false, 00:12:15.904 "generate_uuids": false, 00:12:15.904 "transport_tos": 0, 00:12:15.904 "nvme_error_stat": false, 00:12:15.904 "rdma_srq_size": 0, 00:12:15.904 "io_path_stat": false, 00:12:15.904 "allow_accel_sequence": false, 00:12:15.904 "rdma_max_cq_size": 0, 00:12:15.904 "rdma_cm_event_timeout_ms": 0, 00:12:15.904 "dhchap_digests": [ 00:12:15.904 "sha256", 00:12:15.904 "sha384", 00:12:15.904 "sha512" 00:12:15.904 ], 00:12:15.904 "dhchap_dhgroups": [ 00:12:15.904 "null", 00:12:15.904 "ffdhe2048", 00:12:15.904 "ffdhe3072", 00:12:15.904 "ffdhe4096", 00:12:15.904 "ffdhe6144", 00:12:15.904 "ffdhe8192" 00:12:15.904 ] 00:12:15.904 } 00:12:15.904 }, 00:12:15.904 { 00:12:15.904 "method": "bdev_nvme_set_hotplug", 00:12:15.904 "params": { 00:12:15.904 "period_us": 100000, 00:12:15.904 "enable": false 00:12:15.904 } 00:12:15.904 }, 
00:12:15.904 { 00:12:15.904 "method": "bdev_malloc_create", 00:12:15.904 "params": { 00:12:15.904 "name": "malloc0", 00:12:15.904 "num_blocks": 8192, 00:12:15.904 "block_size": 4096, 00:12:15.904 "physical_block_size": 4096, 00:12:15.904 "uuid": "8c1e7494-95b1-4105-a3a8-136f570b45c0", 00:12:15.904 "optimal_io_boundary": 0, 00:12:15.904 "md_size": 0, 00:12:15.904 "dif_type": 0, 00:12:15.904 "dif_is_head_of_md": false, 00:12:15.904 "dif_pi_format": 0 00:12:15.904 } 00:12:15.904 }, 00:12:15.904 { 00:12:15.904 "method": "bdev_wait_for_examine" 00:12:15.904 } 00:12:15.904 ] 00:12:15.904 }, 00:12:15.904 { 00:12:15.904 "subsystem": "nbd", 00:12:15.904 "config": [] 00:12:15.904 }, 00:12:15.904 { 00:12:15.904 "subsystem": "scheduler", 00:12:15.904 "config": [ 00:12:15.904 { 00:12:15.904 "method": "framework_set_scheduler", 00:12:15.904 "params": { 00:12:15.904 "name": "static" 00:12:15.904 } 00:12:15.904 } 00:12:15.904 ] 00:12:15.904 }, 00:12:15.904 { 00:12:15.904 "subsystem": "nvmf", 00:12:15.904 "config": [ 00:12:15.904 { 00:12:15.904 "method": "nvmf_set_config", 00:12:15.904 "params": { 00:12:15.904 "discovery_filter": "match_any", 00:12:15.904 "admin_cmd_passthru": { 00:12:15.904 "identify_ctrlr": false 00:12:15.904 }, 00:12:15.904 "dhchap_digests": [ 00:12:15.904 "sha256", 00:12:15.904 "sha384", 00:12:15.904 "sha512" 00:12:15.904 ], 00:12:15.904 "dhchap_dhgroups": [ 00:12:15.904 "null", 00:12:15.904 "ffdhe2048", 00:12:15.904 "ffdhe3072", 00:12:15.904 "ffdhe4096", 00:12:15.904 "ffdhe6144", 00:12:15.904 "ffdhe8192" 00:12:15.904 ] 00:12:15.904 } 00:12:15.904 }, 00:12:15.904 { 00:12:15.904 "method": "nvmf_set_max_subsystems", 00:12:15.904 "params": { 00:12:15.904 "max_subsystems": 1024 00:12:15.904 } 00:12:15.904 }, 00:12:15.904 { 00:12:15.904 "method": "nvmf_set_crdt", 00:12:15.904 "params": { 00:12:15.904 "crdt1": 0, 00:12:15.904 "crdt2": 0, 00:12:15.904 "crdt3": 0 00:12:15.904 } 00:12:15.904 }, 00:12:15.904 { 00:12:15.904 "method": "nvmf_create_transport", 00:12:15.904 "params": { 00:12:15.904 "trtype": "TCP", 00:12:15.904 "max_queue_depth": 128, 00:12:15.904 "max_io_qpairs_per_ctrlr": 127, 00:12:15.904 "in_capsule_data_size": 4096, 00:12:15.904 "max_io_size": 131072, 00:12:15.904 "io_unit_size": 131072, 00:12:15.904 "max_aq_depth": 128, 00:12:15.904 "num_shared_buffers": 511, 00:12:15.904 "buf_cache_size": 4294967295, 00:12:15.904 "dif_insert_or_strip": false, 00:12:15.904 "zcopy": false, 00:12:15.904 "c2h_success": false, 00:12:15.904 "sock_priority": 0, 00:12:15.904 "abort_timeout_sec": 1, 00:12:15.904 "ack_timeout": 0, 00:12:15.904 "data_wr_pool_size": 0 00:12:15.904 } 00:12:15.904 }, 00:12:15.904 { 00:12:15.904 "method": "nvmf_create_subsystem", 00:12:15.904 "params": { 00:12:15.904 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:15.904 "allow_any_host": false, 00:12:15.904 "serial_number": "SPDK00000000000001", 00:12:15.904 "model_number": "SPDK bdev Controller", 00:12:15.904 "max_namespaces": 10, 00:12:15.904 "min_cntlid": 1, 00:12:15.904 "max_cntlid": 65519, 00:12:15.904 "ana_reporting": false 00:12:15.904 } 00:12:15.904 }, 00:12:15.904 { 00:12:15.904 "method": "nvmf_subsystem_add_host", 00:12:15.904 "params": { 00:12:15.904 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:15.904 "host": "nqn.2016-06.io.spdk:host1", 00:12:15.904 "psk": "key0" 00:12:15.904 } 00:12:15.904 }, 00:12:15.904 { 00:12:15.904 "method": "nvmf_subsystem_add_ns", 00:12:15.904 "params": { 00:12:15.904 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:15.904 "namespace": { 00:12:15.904 "nsid": 1, 00:12:15.904 "bdev_name": "malloc0", 
00:12:15.904 "nguid": "8C1E749495B14105A3A8136F570B45C0", 00:12:15.904 "uuid": "8c1e7494-95b1-4105-a3a8-136f570b45c0", 00:12:15.904 "no_auto_visible": false 00:12:15.904 } 00:12:15.904 } 00:12:15.904 }, 00:12:15.904 { 00:12:15.904 "method": "nvmf_subsystem_add_listener", 00:12:15.904 "params": { 00:12:15.904 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:15.904 "listen_address": { 00:12:15.904 "trtype": "TCP", 00:12:15.904 "adrfam": "IPv4", 00:12:15.904 "traddr": "10.0.0.3", 00:12:15.904 "trsvcid": "4420" 00:12:15.904 }, 00:12:15.904 "secure_channel": true 00:12:15.904 } 00:12:15.904 } 00:12:15.904 ] 00:12:15.904 } 00:12:15.904 ] 00:12:15.904 }' 00:12:15.905 19:46:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=70663 00:12:15.905 19:46:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 70663 00:12:15.905 19:46:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 70663 ']' 00:12:15.905 19:46:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:12:15.905 19:46:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:15.905 19:46:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:15.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:15.905 19:46:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:15.905 19:46:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:15.905 19:46:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:15.905 [2024-11-26 19:46:11.043895] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:12:15.905 [2024-11-26 19:46:11.043960] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:16.162 [2024-11-26 19:46:11.174359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:16.162 [2024-11-26 19:46:11.206662] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:16.162 [2024-11-26 19:46:11.206705] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:16.162 [2024-11-26 19:46:11.206711] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:16.162 [2024-11-26 19:46:11.206715] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:16.162 [2024-11-26 19:46:11.206718] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:16.162 [2024-11-26 19:46:11.207023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:16.162 [2024-11-26 19:46:11.348468] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:16.420 [2024-11-26 19:46:11.410432] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:16.420 [2024-11-26 19:46:11.442355] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:16.420 [2024-11-26 19:46:11.442507] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:16.690 19:46:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:16.691 19:46:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:16.691 19:46:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:16.691 19:46:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:16.691 19:46:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:16.691 19:46:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:16.691 19:46:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=70695 00:12:16.691 19:46:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 70695 /var/tmp/bdevperf.sock 00:12:16.691 19:46:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 70695 ']' 00:12:16.691 19:46:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:16.691 19:46:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:16.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:16.691 19:46:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:12:16.691 19:46:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:16.691 19:46:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:16.691 19:46:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:12:16.691 "subsystems": [ 00:12:16.691 { 00:12:16.691 "subsystem": "keyring", 00:12:16.691 "config": [ 00:12:16.691 { 00:12:16.691 "method": "keyring_file_add_key", 00:12:16.691 "params": { 00:12:16.691 "name": "key0", 00:12:16.691 "path": "/tmp/tmp.ZiBJzuZ0GM" 00:12:16.691 } 00:12:16.691 } 00:12:16.691 ] 00:12:16.691 }, 00:12:16.691 { 00:12:16.691 "subsystem": "iobuf", 00:12:16.691 "config": [ 00:12:16.691 { 00:12:16.691 "method": "iobuf_set_options", 00:12:16.691 "params": { 00:12:16.691 "small_pool_count": 8192, 00:12:16.691 "large_pool_count": 1024, 00:12:16.691 "small_bufsize": 8192, 00:12:16.691 "large_bufsize": 135168, 00:12:16.691 "enable_numa": false 00:12:16.691 } 00:12:16.691 } 00:12:16.691 ] 00:12:16.691 }, 00:12:16.691 { 00:12:16.691 "subsystem": "sock", 00:12:16.691 "config": [ 00:12:16.691 { 00:12:16.691 "method": "sock_set_default_impl", 00:12:16.691 "params": { 00:12:16.691 "impl_name": "uring" 00:12:16.691 } 00:12:16.691 }, 00:12:16.691 { 00:12:16.691 "method": "sock_impl_set_options", 00:12:16.691 "params": { 00:12:16.691 "impl_name": "ssl", 00:12:16.691 "recv_buf_size": 4096, 00:12:16.691 "send_buf_size": 4096, 00:12:16.691 "enable_recv_pipe": true, 00:12:16.691 "enable_quickack": false, 00:12:16.691 "enable_placement_id": 0, 00:12:16.691 "enable_zerocopy_send_server": true, 00:12:16.691 "enable_zerocopy_send_client": false, 00:12:16.691 "zerocopy_threshold": 0, 00:12:16.691 "tls_version": 0, 00:12:16.691 "enable_ktls": false 00:12:16.691 } 00:12:16.691 }, 00:12:16.691 { 00:12:16.691 "method": "sock_impl_set_options", 00:12:16.691 "params": { 00:12:16.691 "impl_name": "posix", 00:12:16.691 "recv_buf_size": 2097152, 00:12:16.691 "send_buf_size": 2097152, 00:12:16.691 "enable_recv_pipe": true, 00:12:16.691 "enable_quickack": false, 00:12:16.691 "enable_placement_id": 0, 00:12:16.691 "enable_zerocopy_send_server": true, 00:12:16.691 "enable_zerocopy_send_client": false, 00:12:16.691 "zerocopy_threshold": 0, 00:12:16.691 "tls_version": 0, 00:12:16.691 "enable_ktls": false 00:12:16.691 } 00:12:16.691 }, 00:12:16.691 { 00:12:16.691 "method": "sock_impl_set_options", 00:12:16.691 "params": { 00:12:16.691 "impl_name": "uring", 00:12:16.691 "recv_buf_size": 2097152, 00:12:16.691 "send_buf_size": 2097152, 00:12:16.691 "enable_recv_pipe": true, 00:12:16.691 "enable_quickack": false, 00:12:16.691 "enable_placement_id": 0, 00:12:16.691 "enable_zerocopy_send_server": false, 00:12:16.691 "enable_zerocopy_send_client": false, 00:12:16.691 "zerocopy_threshold": 0, 00:12:16.691 "tls_version": 0, 00:12:16.691 "enable_ktls": false 00:12:16.691 } 00:12:16.691 } 00:12:16.691 ] 00:12:16.691 }, 00:12:16.691 { 00:12:16.691 "subsystem": "vmd", 00:12:16.691 "config": [] 00:12:16.691 }, 00:12:16.691 { 00:12:16.691 "subsystem": "accel", 00:12:16.691 "config": [ 00:12:16.691 { 00:12:16.691 "method": "accel_set_options", 00:12:16.691 "params": { 00:12:16.691 "small_cache_size": 128, 00:12:16.691 "large_cache_size": 16, 00:12:16.691 "task_count": 2048, 00:12:16.691 "sequence_count": 2048, 00:12:16.691 "buf_count": 2048 00:12:16.691 } 00:12:16.691 } 00:12:16.691 ] 00:12:16.691 }, 00:12:16.691 { 00:12:16.691 "subsystem": "bdev", 00:12:16.691 "config": [ 00:12:16.691 { 00:12:16.691 "method": 
"bdev_set_options", 00:12:16.691 "params": { 00:12:16.691 "bdev_io_pool_size": 65535, 00:12:16.691 "bdev_io_cache_size": 256, 00:12:16.691 "bdev_auto_examine": true, 00:12:16.691 "iobuf_small_cache_size": 128, 00:12:16.691 "iobuf_large_cache_size": 16 00:12:16.691 } 00:12:16.691 }, 00:12:16.691 { 00:12:16.691 "method": "bdev_raid_set_options", 00:12:16.691 "params": { 00:12:16.691 "process_window_size_kb": 1024, 00:12:16.691 "process_max_bandwidth_mb_sec": 0 00:12:16.691 } 00:12:16.691 }, 00:12:16.691 { 00:12:16.691 "method": "bdev_iscsi_set_options", 00:12:16.691 "params": { 00:12:16.691 "timeout_sec": 30 00:12:16.691 } 00:12:16.691 }, 00:12:16.691 { 00:12:16.691 "method": "bdev_nvme_set_options", 00:12:16.691 "params": { 00:12:16.691 "action_on_timeout": "none", 00:12:16.691 "timeout_us": 0, 00:12:16.691 "timeout_admin_us": 0, 00:12:16.691 "keep_alive_timeout_ms": 10000, 00:12:16.691 "arbitration_burst": 0, 00:12:16.691 "low_priority_weight": 0, 00:12:16.691 "medium_priority_weight": 0, 00:12:16.691 "high_priority_weight": 0, 00:12:16.691 "nvme_adminq_poll_period_us": 10000, 00:12:16.691 "nvme_ioq_poll_period_us": 0, 00:12:16.691 "io_queue_requests": 512, 00:12:16.691 "delay_cmd_submit": true, 00:12:16.691 "transport_retry_count": 4, 00:12:16.691 "bdev_retry_count": 3, 00:12:16.691 "transport_ack_timeout": 0, 00:12:16.691 "ctrlr_loss_timeout_sec": 0, 00:12:16.691 "reconnect_delay_sec": 0, 00:12:16.691 "fast_io_fail_timeout_sec": 0, 00:12:16.691 "disable_auto_failback": false, 00:12:16.691 "generate_uuids": false, 00:12:16.691 "transport_tos": 0, 00:12:16.691 "nvme_error_stat": false, 00:12:16.691 "rdma_srq_size": 0, 00:12:16.691 "io_path_stat": false, 00:12:16.691 "allow_accel_sequence": false, 00:12:16.691 "rdma_max_cq_size": 0, 00:12:16.691 "rdma_cm_event_timeout_ms": 0, 00:12:16.691 "dhchap_digests": [ 00:12:16.691 "sha256", 00:12:16.691 "sha384", 00:12:16.691 "sha512" 00:12:16.691 ], 00:12:16.691 "dhchap_dhgroups": [ 00:12:16.691 "null", 00:12:16.691 "ffdhe2048", 00:12:16.691 "ffdhe3072", 00:12:16.691 "ffdhe4096", 00:12:16.691 "ffdhe6144", 00:12:16.691 "ffdhe8192" 00:12:16.691 ] 00:12:16.691 } 00:12:16.691 }, 00:12:16.691 { 00:12:16.691 "method": "bdev_nvme_attach_controller", 00:12:16.691 "params": { 00:12:16.691 "name": "TLSTEST", 00:12:16.691 "trtype": "TCP", 00:12:16.691 "adrfam": "IPv4", 00:12:16.691 "traddr": "10.0.0.3", 00:12:16.691 "trsvcid": "4420", 00:12:16.691 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:16.691 "prchk_reftag": false, 00:12:16.691 "prchk_guard": false, 00:12:16.691 "ctrlr_loss_timeout_sec": 0, 00:12:16.691 "reconnect_delay_sec": 0, 00:12:16.691 "fast_io_fail_timeout_sec": 0, 00:12:16.691 "psk": "key0", 00:12:16.691 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:16.691 "hdgst": false, 00:12:16.691 "ddgst": false, 00:12:16.691 "multipath": "multipath" 00:12:16.691 } 00:12:16.691 }, 00:12:16.691 { 00:12:16.691 "method": "bdev_nvme_set_hotplug", 00:12:16.691 "params": { 00:12:16.691 "period_us": 100000, 00:12:16.691 "enable": false 00:12:16.691 } 00:12:16.691 }, 00:12:16.691 { 00:12:16.692 "method": "bdev_wait_for_examine" 00:12:16.692 } 00:12:16.692 ] 00:12:16.692 }, 00:12:16.692 { 00:12:16.692 "subsystem": "nbd", 00:12:16.692 "config": [] 00:12:16.692 } 00:12:16.692 ] 00:12:16.692 }' 00:12:16.692 19:46:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:12:16.949 [2024-11-26 19:46:11.949582] Starting SPDK 
v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:12:16.949 [2024-11-26 19:46:11.949651] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70695 ] 00:12:16.949 [2024-11-26 19:46:12.091131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:16.949 [2024-11-26 19:46:12.129191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:17.207 [2024-11-26 19:46:12.241970] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:17.207 [2024-11-26 19:46:12.284635] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:17.772 19:46:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:17.772 19:46:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:17.772 19:46:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:12:17.772 Running I/O for 10 seconds... 00:12:20.080 6279.00 IOPS, 24.53 MiB/s [2024-11-26T19:46:16.263Z] 6673.00 IOPS, 26.07 MiB/s [2024-11-26T19:46:17.196Z] 6790.67 IOPS, 26.53 MiB/s [2024-11-26T19:46:18.130Z] 6819.75 IOPS, 26.64 MiB/s [2024-11-26T19:46:19.063Z] 6825.60 IOPS, 26.66 MiB/s [2024-11-26T19:46:19.999Z] 6835.83 IOPS, 26.70 MiB/s [2024-11-26T19:46:20.932Z] 6862.71 IOPS, 26.81 MiB/s [2024-11-26T19:46:22.306Z] 6874.88 IOPS, 26.85 MiB/s [2024-11-26T19:46:23.238Z] 6895.00 IOPS, 26.93 MiB/s [2024-11-26T19:46:23.238Z] 6916.30 IOPS, 27.02 MiB/s 00:12:27.991 Latency(us) 00:12:27.991 [2024-11-26T19:46:23.238Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:27.991 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:12:27.991 Verification LBA range: start 0x0 length 0x2000 00:12:27.991 TLSTESTn1 : 10.01 6922.17 27.04 0.00 0.00 18461.37 3478.45 16131.94 00:12:27.991 [2024-11-26T19:46:23.238Z] =================================================================================================================== 00:12:27.991 [2024-11-26T19:46:23.238Z] Total : 6922.17 27.04 0.00 0.00 18461.37 3478.45 16131.94 00:12:27.991 { 00:12:27.991 "results": [ 00:12:27.991 { 00:12:27.991 "job": "TLSTESTn1", 00:12:27.991 "core_mask": "0x4", 00:12:27.991 "workload": "verify", 00:12:27.991 "status": "finished", 00:12:27.991 "verify_range": { 00:12:27.992 "start": 0, 00:12:27.992 "length": 8192 00:12:27.992 }, 00:12:27.992 "queue_depth": 128, 00:12:27.992 "io_size": 4096, 00:12:27.992 "runtime": 10.009871, 00:12:27.992 "iops": 6922.167128827135, 00:12:27.992 "mibps": 27.039715346980994, 00:12:27.992 "io_failed": 0, 00:12:27.992 "io_timeout": 0, 00:12:27.992 "avg_latency_us": 18461.36504190859, 00:12:27.992 "min_latency_us": 3478.449230769231, 00:12:27.992 "max_latency_us": 16131.938461538462 00:12:27.992 } 00:12:27.992 ], 00:12:27.992 "core_count": 1 00:12:27.992 } 00:12:27.992 19:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:27.992 19:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 70695 00:12:27.992 19:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 70695 ']' 
00:12:27.992 19:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 70695 00:12:27.992 19:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:27.992 19:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:27.992 19:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70695 00:12:27.992 killing process with pid 70695 00:12:27.992 Received shutdown signal, test time was about 10.000000 seconds 00:12:27.992 00:12:27.992 Latency(us) 00:12:27.992 [2024-11-26T19:46:23.239Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:27.992 [2024-11-26T19:46:23.239Z] =================================================================================================================== 00:12:27.992 [2024-11-26T19:46:23.239Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:27.992 19:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:12:27.992 19:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:12:27.992 19:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70695' 00:12:27.992 19:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 70695 00:12:27.992 19:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 70695 00:12:27.992 19:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 70663 00:12:27.992 19:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 70663 ']' 00:12:27.992 19:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 70663 00:12:27.992 19:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:27.992 19:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:27.992 19:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70663 00:12:27.992 killing process with pid 70663 00:12:27.992 19:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:27.992 19:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:27.992 19:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70663' 00:12:27.992 19:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 70663 00:12:27.992 19:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 70663 00:12:27.992 19:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:12:27.992 19:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:27.992 19:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:27.992 19:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:27.992 19:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=70834 00:12:27.992 19:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 70834 00:12:27.992 19:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 70834 ']' 00:12:27.992 19:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:27.992 19:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:12:27.992 19:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:27.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:27.992 19:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:27.992 19:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:27.992 19:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:27.992 [2024-11-26 19:46:23.232479] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:12:27.992 [2024-11-26 19:46:23.232536] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:28.251 [2024-11-26 19:46:23.374957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:28.251 [2024-11-26 19:46:23.410017] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:28.251 [2024-11-26 19:46:23.410161] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:28.251 [2024-11-26 19:46:23.410173] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:28.251 [2024-11-26 19:46:23.410177] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:28.251 [2024-11-26 19:46:23.410182] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:28.251 [2024-11-26 19:46:23.410433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:28.251 [2024-11-26 19:46:23.441213] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:28.918 19:46:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:28.918 19:46:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:28.918 19:46:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:28.919 19:46:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:28.919 19:46:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:28.919 19:46:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:28.919 19:46:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.ZiBJzuZ0GM 00:12:28.919 19:46:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ZiBJzuZ0GM 00:12:28.919 19:46:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:29.176 [2024-11-26 19:46:24.282863] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:29.176 19:46:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:29.433 19:46:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:12:29.692 [2024-11-26 19:46:24.682937] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:29.692 [2024-11-26 19:46:24.683112] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:29.692 19:46:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:29.950 malloc0 00:12:29.950 19:46:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:29.950 19:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ZiBJzuZ0GM 00:12:30.208 19:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:12:30.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:12:30.468 19:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=70890 00:12:30.468 19:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:30.468 19:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:12:30.468 19:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 70890 /var/tmp/bdevperf.sock 00:12:30.468 19:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 70890 ']' 00:12:30.468 19:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:30.468 19:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:30.468 19:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:30.468 19:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:30.468 19:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:30.468 [2024-11-26 19:46:25.580100] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:12:30.468 [2024-11-26 19:46:25.580166] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70890 ] 00:12:30.730 [2024-11-26 19:46:25.721346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:30.730 [2024-11-26 19:46:25.758148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:30.730 [2024-11-26 19:46:25.789458] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:31.296 19:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:31.296 19:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:31.296 19:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZiBJzuZ0GM 00:12:31.553 19:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:12:31.811 [2024-11-26 19:46:26.850917] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:31.811 nvme0n1 00:12:31.811 19:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:31.811 Running I/O for 1 seconds... 
00:12:33.212 6268.00 IOPS, 24.48 MiB/s 00:12:33.212 Latency(us) 00:12:33.212 [2024-11-26T19:46:28.459Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:33.212 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:33.212 Verification LBA range: start 0x0 length 0x2000 00:12:33.212 nvme0n1 : 1.01 6337.00 24.75 0.00 0.00 20090.08 2734.87 17442.66 00:12:33.212 [2024-11-26T19:46:28.459Z] =================================================================================================================== 00:12:33.212 [2024-11-26T19:46:28.459Z] Total : 6337.00 24.75 0.00 0.00 20090.08 2734.87 17442.66 00:12:33.212 { 00:12:33.212 "results": [ 00:12:33.212 { 00:12:33.212 "job": "nvme0n1", 00:12:33.212 "core_mask": "0x2", 00:12:33.212 "workload": "verify", 00:12:33.212 "status": "finished", 00:12:33.212 "verify_range": { 00:12:33.212 "start": 0, 00:12:33.212 "length": 8192 00:12:33.212 }, 00:12:33.212 "queue_depth": 128, 00:12:33.212 "io_size": 4096, 00:12:33.212 "runtime": 1.009311, 00:12:33.212 "iops": 6336.996228119975, 00:12:33.212 "mibps": 24.753891516093653, 00:12:33.212 "io_failed": 0, 00:12:33.212 "io_timeout": 0, 00:12:33.212 "avg_latency_us": 20090.08491845865, 00:12:33.212 "min_latency_us": 2734.8676923076923, 00:12:33.212 "max_latency_us": 17442.65846153846 00:12:33.212 } 00:12:33.212 ], 00:12:33.212 "core_count": 1 00:12:33.212 } 00:12:33.212 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 70890 00:12:33.212 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 70890 ']' 00:12:33.212 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 70890 00:12:33.212 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:33.212 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:33.212 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70890 00:12:33.212 killing process with pid 70890 00:12:33.212 Received shutdown signal, test time was about 1.000000 seconds 00:12:33.212 00:12:33.212 Latency(us) 00:12:33.212 [2024-11-26T19:46:28.459Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:33.212 [2024-11-26T19:46:28.459Z] =================================================================================================================== 00:12:33.212 [2024-11-26T19:46:28.459Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:33.212 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:33.212 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:33.212 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70890' 00:12:33.212 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 70890 00:12:33.212 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 70890 00:12:33.212 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 70834 00:12:33.212 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 70834 ']' 00:12:33.212 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 70834 00:12:33.212 19:46:28 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:33.212 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:33.212 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70834 00:12:33.212 killing process with pid 70834 00:12:33.212 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:33.212 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:33.212 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70834' 00:12:33.212 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 70834 00:12:33.212 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 70834 00:12:33.212 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:12:33.212 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:33.212 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:33.212 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:33.212 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=70930 00:12:33.212 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 70930 00:12:33.212 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:12:33.212 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 70930 ']' 00:12:33.212 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:33.212 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:33.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:33.212 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:33.212 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:33.212 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:33.212 [2024-11-26 19:46:28.363642] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:12:33.212 [2024-11-26 19:46:28.363821] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:33.470 [2024-11-26 19:46:28.497110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:33.470 [2024-11-26 19:46:28.527812] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:33.470 [2024-11-26 19:46:28.527844] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:33.470 [2024-11-26 19:46:28.527849] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:33.470 [2024-11-26 19:46:28.527853] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:33.470 [2024-11-26 19:46:28.527857] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:33.470 [2024-11-26 19:46:28.528073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.470 [2024-11-26 19:46:28.557340] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:34.037 19:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:34.037 19:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:34.037 19:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:34.037 19:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:34.037 19:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:34.295 19:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:34.295 19:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:12:34.295 19:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.295 19:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:34.295 [2024-11-26 19:46:29.294338] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:34.295 malloc0 00:12:34.295 [2024-11-26 19:46:29.320159] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:34.295 [2024-11-26 19:46:29.320388] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:34.295 19:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.295 19:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=70962 00:12:34.295 19:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:12:34.295 19:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 70962 /var/tmp/bdevperf.sock 00:12:34.295 19:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 70962 ']' 00:12:34.295 19:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:34.295 19:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:34.295 19:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:34.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:12:34.295 19:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:34.295 19:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:34.295 [2024-11-26 19:46:29.386943] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:12:34.295 [2024-11-26 19:46:29.387150] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70962 ] 00:12:34.295 [2024-11-26 19:46:29.525064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:34.554 [2024-11-26 19:46:29.560326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:34.554 [2024-11-26 19:46:29.591841] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:35.126 19:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:35.126 19:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:35.126 19:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZiBJzuZ0GM 00:12:35.383 19:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:12:35.383 [2024-11-26 19:46:30.596265] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:35.642 nvme0n1 00:12:35.642 19:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:35.642 Running I/O for 1 seconds... 
00:12:36.575 6313.00 IOPS, 24.66 MiB/s 00:12:36.575 Latency(us) 00:12:36.575 [2024-11-26T19:46:31.822Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:36.575 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:36.575 Verification LBA range: start 0x0 length 0x2000 00:12:36.575 nvme0n1 : 1.01 6381.52 24.93 0.00 0.00 19950.27 2772.68 15930.29 00:12:36.575 [2024-11-26T19:46:31.822Z] =================================================================================================================== 00:12:36.575 [2024-11-26T19:46:31.822Z] Total : 6381.52 24.93 0.00 0.00 19950.27 2772.68 15930.29 00:12:36.575 { 00:12:36.575 "results": [ 00:12:36.575 { 00:12:36.575 "job": "nvme0n1", 00:12:36.575 "core_mask": "0x2", 00:12:36.575 "workload": "verify", 00:12:36.575 "status": "finished", 00:12:36.575 "verify_range": { 00:12:36.575 "start": 0, 00:12:36.575 "length": 8192 00:12:36.575 }, 00:12:36.575 "queue_depth": 128, 00:12:36.575 "io_size": 4096, 00:12:36.575 "runtime": 1.009478, 00:12:36.575 "iops": 6381.515991433196, 00:12:36.575 "mibps": 24.927796841535923, 00:12:36.575 "io_failed": 0, 00:12:36.575 "io_timeout": 0, 00:12:36.575 "avg_latency_us": 19950.265796097723, 00:12:36.575 "min_latency_us": 2772.6769230769232, 00:12:36.575 "max_latency_us": 15930.289230769231 00:12:36.575 } 00:12:36.575 ], 00:12:36.575 "core_count": 1 00:12:36.575 } 00:12:36.575 19:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:12:36.575 19:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.575 19:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:36.833 19:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.833 19:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:12:36.833 "subsystems": [ 00:12:36.833 { 00:12:36.833 "subsystem": "keyring", 00:12:36.833 "config": [ 00:12:36.833 { 00:12:36.833 "method": "keyring_file_add_key", 00:12:36.833 "params": { 00:12:36.833 "name": "key0", 00:12:36.833 "path": "/tmp/tmp.ZiBJzuZ0GM" 00:12:36.833 } 00:12:36.833 } 00:12:36.833 ] 00:12:36.833 }, 00:12:36.833 { 00:12:36.833 "subsystem": "iobuf", 00:12:36.833 "config": [ 00:12:36.833 { 00:12:36.833 "method": "iobuf_set_options", 00:12:36.833 "params": { 00:12:36.833 "small_pool_count": 8192, 00:12:36.833 "large_pool_count": 1024, 00:12:36.833 "small_bufsize": 8192, 00:12:36.833 "large_bufsize": 135168, 00:12:36.833 "enable_numa": false 00:12:36.833 } 00:12:36.833 } 00:12:36.833 ] 00:12:36.833 }, 00:12:36.833 { 00:12:36.833 "subsystem": "sock", 00:12:36.833 "config": [ 00:12:36.833 { 00:12:36.833 "method": "sock_set_default_impl", 00:12:36.833 "params": { 00:12:36.833 "impl_name": "uring" 00:12:36.833 } 00:12:36.833 }, 00:12:36.833 { 00:12:36.833 "method": "sock_impl_set_options", 00:12:36.833 "params": { 00:12:36.833 "impl_name": "ssl", 00:12:36.833 "recv_buf_size": 4096, 00:12:36.833 "send_buf_size": 4096, 00:12:36.833 "enable_recv_pipe": true, 00:12:36.833 "enable_quickack": false, 00:12:36.833 "enable_placement_id": 0, 00:12:36.833 "enable_zerocopy_send_server": true, 00:12:36.833 "enable_zerocopy_send_client": false, 00:12:36.833 "zerocopy_threshold": 0, 00:12:36.833 "tls_version": 0, 00:12:36.833 "enable_ktls": false 00:12:36.833 } 00:12:36.833 }, 00:12:36.833 { 00:12:36.833 "method": "sock_impl_set_options", 00:12:36.833 "params": { 00:12:36.833 "impl_name": 
"posix", 00:12:36.833 "recv_buf_size": 2097152, 00:12:36.833 "send_buf_size": 2097152, 00:12:36.833 "enable_recv_pipe": true, 00:12:36.833 "enable_quickack": false, 00:12:36.833 "enable_placement_id": 0, 00:12:36.833 "enable_zerocopy_send_server": true, 00:12:36.833 "enable_zerocopy_send_client": false, 00:12:36.833 "zerocopy_threshold": 0, 00:12:36.833 "tls_version": 0, 00:12:36.833 "enable_ktls": false 00:12:36.833 } 00:12:36.833 }, 00:12:36.833 { 00:12:36.833 "method": "sock_impl_set_options", 00:12:36.833 "params": { 00:12:36.833 "impl_name": "uring", 00:12:36.833 "recv_buf_size": 2097152, 00:12:36.833 "send_buf_size": 2097152, 00:12:36.833 "enable_recv_pipe": true, 00:12:36.833 "enable_quickack": false, 00:12:36.833 "enable_placement_id": 0, 00:12:36.833 "enable_zerocopy_send_server": false, 00:12:36.833 "enable_zerocopy_send_client": false, 00:12:36.833 "zerocopy_threshold": 0, 00:12:36.833 "tls_version": 0, 00:12:36.833 "enable_ktls": false 00:12:36.833 } 00:12:36.833 } 00:12:36.834 ] 00:12:36.834 }, 00:12:36.834 { 00:12:36.834 "subsystem": "vmd", 00:12:36.834 "config": [] 00:12:36.834 }, 00:12:36.834 { 00:12:36.834 "subsystem": "accel", 00:12:36.834 "config": [ 00:12:36.834 { 00:12:36.834 "method": "accel_set_options", 00:12:36.834 "params": { 00:12:36.834 "small_cache_size": 128, 00:12:36.834 "large_cache_size": 16, 00:12:36.834 "task_count": 2048, 00:12:36.834 "sequence_count": 2048, 00:12:36.834 "buf_count": 2048 00:12:36.834 } 00:12:36.834 } 00:12:36.834 ] 00:12:36.834 }, 00:12:36.834 { 00:12:36.834 "subsystem": "bdev", 00:12:36.834 "config": [ 00:12:36.834 { 00:12:36.834 "method": "bdev_set_options", 00:12:36.834 "params": { 00:12:36.834 "bdev_io_pool_size": 65535, 00:12:36.834 "bdev_io_cache_size": 256, 00:12:36.834 "bdev_auto_examine": true, 00:12:36.834 "iobuf_small_cache_size": 128, 00:12:36.834 "iobuf_large_cache_size": 16 00:12:36.834 } 00:12:36.834 }, 00:12:36.834 { 00:12:36.834 "method": "bdev_raid_set_options", 00:12:36.834 "params": { 00:12:36.834 "process_window_size_kb": 1024, 00:12:36.834 "process_max_bandwidth_mb_sec": 0 00:12:36.834 } 00:12:36.834 }, 00:12:36.834 { 00:12:36.834 "method": "bdev_iscsi_set_options", 00:12:36.834 "params": { 00:12:36.834 "timeout_sec": 30 00:12:36.834 } 00:12:36.834 }, 00:12:36.834 { 00:12:36.834 "method": "bdev_nvme_set_options", 00:12:36.834 "params": { 00:12:36.834 "action_on_timeout": "none", 00:12:36.834 "timeout_us": 0, 00:12:36.834 "timeout_admin_us": 0, 00:12:36.834 "keep_alive_timeout_ms": 10000, 00:12:36.834 "arbitration_burst": 0, 00:12:36.834 "low_priority_weight": 0, 00:12:36.834 "medium_priority_weight": 0, 00:12:36.834 "high_priority_weight": 0, 00:12:36.834 "nvme_adminq_poll_period_us": 10000, 00:12:36.834 "nvme_ioq_poll_period_us": 0, 00:12:36.834 "io_queue_requests": 0, 00:12:36.834 "delay_cmd_submit": true, 00:12:36.834 "transport_retry_count": 4, 00:12:36.834 "bdev_retry_count": 3, 00:12:36.834 "transport_ack_timeout": 0, 00:12:36.834 "ctrlr_loss_timeout_sec": 0, 00:12:36.834 "reconnect_delay_sec": 0, 00:12:36.834 "fast_io_fail_timeout_sec": 0, 00:12:36.834 "disable_auto_failback": false, 00:12:36.834 "generate_uuids": false, 00:12:36.834 "transport_tos": 0, 00:12:36.834 "nvme_error_stat": false, 00:12:36.834 "rdma_srq_size": 0, 00:12:36.834 "io_path_stat": false, 00:12:36.834 "allow_accel_sequence": false, 00:12:36.834 "rdma_max_cq_size": 0, 00:12:36.834 "rdma_cm_event_timeout_ms": 0, 00:12:36.834 "dhchap_digests": [ 00:12:36.834 "sha256", 00:12:36.834 "sha384", 00:12:36.834 "sha512" 00:12:36.834 ], 00:12:36.834 
"dhchap_dhgroups": [ 00:12:36.834 "null", 00:12:36.834 "ffdhe2048", 00:12:36.834 "ffdhe3072", 00:12:36.834 "ffdhe4096", 00:12:36.834 "ffdhe6144", 00:12:36.834 "ffdhe8192" 00:12:36.834 ] 00:12:36.834 } 00:12:36.834 }, 00:12:36.834 { 00:12:36.834 "method": "bdev_nvme_set_hotplug", 00:12:36.834 "params": { 00:12:36.834 "period_us": 100000, 00:12:36.834 "enable": false 00:12:36.834 } 00:12:36.834 }, 00:12:36.834 { 00:12:36.834 "method": "bdev_malloc_create", 00:12:36.834 "params": { 00:12:36.834 "name": "malloc0", 00:12:36.834 "num_blocks": 8192, 00:12:36.834 "block_size": 4096, 00:12:36.834 "physical_block_size": 4096, 00:12:36.834 "uuid": "56932b60-e075-4517-9fbe-7efe451ecc25", 00:12:36.834 "optimal_io_boundary": 0, 00:12:36.834 "md_size": 0, 00:12:36.834 "dif_type": 0, 00:12:36.834 "dif_is_head_of_md": false, 00:12:36.834 "dif_pi_format": 0 00:12:36.834 } 00:12:36.834 }, 00:12:36.834 { 00:12:36.834 "method": "bdev_wait_for_examine" 00:12:36.834 } 00:12:36.834 ] 00:12:36.834 }, 00:12:36.834 { 00:12:36.834 "subsystem": "nbd", 00:12:36.834 "config": [] 00:12:36.834 }, 00:12:36.834 { 00:12:36.834 "subsystem": "scheduler", 00:12:36.834 "config": [ 00:12:36.834 { 00:12:36.834 "method": "framework_set_scheduler", 00:12:36.834 "params": { 00:12:36.834 "name": "static" 00:12:36.834 } 00:12:36.834 } 00:12:36.834 ] 00:12:36.834 }, 00:12:36.834 { 00:12:36.834 "subsystem": "nvmf", 00:12:36.834 "config": [ 00:12:36.834 { 00:12:36.834 "method": "nvmf_set_config", 00:12:36.834 "params": { 00:12:36.834 "discovery_filter": "match_any", 00:12:36.834 "admin_cmd_passthru": { 00:12:36.834 "identify_ctrlr": false 00:12:36.834 }, 00:12:36.834 "dhchap_digests": [ 00:12:36.834 "sha256", 00:12:36.834 "sha384", 00:12:36.834 "sha512" 00:12:36.834 ], 00:12:36.834 "dhchap_dhgroups": [ 00:12:36.834 "null", 00:12:36.834 "ffdhe2048", 00:12:36.834 "ffdhe3072", 00:12:36.834 "ffdhe4096", 00:12:36.834 "ffdhe6144", 00:12:36.834 "ffdhe8192" 00:12:36.834 ] 00:12:36.834 } 00:12:36.834 }, 00:12:36.834 { 00:12:36.834 "method": "nvmf_set_max_subsystems", 00:12:36.834 "params": { 00:12:36.834 "max_subsystems": 1024 00:12:36.834 } 00:12:36.834 }, 00:12:36.834 { 00:12:36.834 "method": "nvmf_set_crdt", 00:12:36.834 "params": { 00:12:36.834 "crdt1": 0, 00:12:36.834 "crdt2": 0, 00:12:36.834 "crdt3": 0 00:12:36.834 } 00:12:36.834 }, 00:12:36.834 { 00:12:36.834 "method": "nvmf_create_transport", 00:12:36.834 "params": { 00:12:36.834 "trtype": "TCP", 00:12:36.834 "max_queue_depth": 128, 00:12:36.834 "max_io_qpairs_per_ctrlr": 127, 00:12:36.834 "in_capsule_data_size": 4096, 00:12:36.834 "max_io_size": 131072, 00:12:36.834 "io_unit_size": 131072, 00:12:36.834 "max_aq_depth": 128, 00:12:36.834 "num_shared_buffers": 511, 00:12:36.834 "buf_cache_size": 4294967295, 00:12:36.834 "dif_insert_or_strip": false, 00:12:36.834 "zcopy": false, 00:12:36.834 "c2h_success": false, 00:12:36.834 "sock_priority": 0, 00:12:36.834 "abort_timeout_sec": 1, 00:12:36.834 "ack_timeout": 0, 00:12:36.834 "data_wr_pool_size": 0 00:12:36.834 } 00:12:36.834 }, 00:12:36.834 { 00:12:36.834 "method": "nvmf_create_subsystem", 00:12:36.834 "params": { 00:12:36.834 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:36.834 "allow_any_host": false, 00:12:36.834 "serial_number": "00000000000000000000", 00:12:36.834 "model_number": "SPDK bdev Controller", 00:12:36.834 "max_namespaces": 32, 00:12:36.834 "min_cntlid": 1, 00:12:36.834 "max_cntlid": 65519, 00:12:36.834 "ana_reporting": false 00:12:36.834 } 00:12:36.834 }, 00:12:36.834 { 00:12:36.834 "method": "nvmf_subsystem_add_host", 
00:12:36.834 "params": { 00:12:36.834 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:36.834 "host": "nqn.2016-06.io.spdk:host1", 00:12:36.834 "psk": "key0" 00:12:36.834 } 00:12:36.834 }, 00:12:36.834 { 00:12:36.834 "method": "nvmf_subsystem_add_ns", 00:12:36.834 "params": { 00:12:36.834 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:36.834 "namespace": { 00:12:36.834 "nsid": 1, 00:12:36.834 "bdev_name": "malloc0", 00:12:36.834 "nguid": "56932B60E07545179FBE7EFE451ECC25", 00:12:36.834 "uuid": "56932b60-e075-4517-9fbe-7efe451ecc25", 00:12:36.834 "no_auto_visible": false 00:12:36.834 } 00:12:36.834 } 00:12:36.834 }, 00:12:36.834 { 00:12:36.834 "method": "nvmf_subsystem_add_listener", 00:12:36.834 "params": { 00:12:36.834 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:36.834 "listen_address": { 00:12:36.834 "trtype": "TCP", 00:12:36.834 "adrfam": "IPv4", 00:12:36.834 "traddr": "10.0.0.3", 00:12:36.834 "trsvcid": "4420" 00:12:36.834 }, 00:12:36.834 "secure_channel": false, 00:12:36.834 "sock_impl": "ssl" 00:12:36.834 } 00:12:36.834 } 00:12:36.834 ] 00:12:36.834 } 00:12:36.834 ] 00:12:36.834 }' 00:12:36.834 19:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:12:37.092 19:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:12:37.092 "subsystems": [ 00:12:37.092 { 00:12:37.092 "subsystem": "keyring", 00:12:37.093 "config": [ 00:12:37.093 { 00:12:37.093 "method": "keyring_file_add_key", 00:12:37.093 "params": { 00:12:37.093 "name": "key0", 00:12:37.093 "path": "/tmp/tmp.ZiBJzuZ0GM" 00:12:37.093 } 00:12:37.093 } 00:12:37.093 ] 00:12:37.093 }, 00:12:37.093 { 00:12:37.093 "subsystem": "iobuf", 00:12:37.093 "config": [ 00:12:37.093 { 00:12:37.093 "method": "iobuf_set_options", 00:12:37.093 "params": { 00:12:37.093 "small_pool_count": 8192, 00:12:37.093 "large_pool_count": 1024, 00:12:37.093 "small_bufsize": 8192, 00:12:37.093 "large_bufsize": 135168, 00:12:37.093 "enable_numa": false 00:12:37.093 } 00:12:37.093 } 00:12:37.093 ] 00:12:37.093 }, 00:12:37.093 { 00:12:37.093 "subsystem": "sock", 00:12:37.093 "config": [ 00:12:37.093 { 00:12:37.093 "method": "sock_set_default_impl", 00:12:37.093 "params": { 00:12:37.093 "impl_name": "uring" 00:12:37.093 } 00:12:37.093 }, 00:12:37.093 { 00:12:37.093 "method": "sock_impl_set_options", 00:12:37.093 "params": { 00:12:37.093 "impl_name": "ssl", 00:12:37.093 "recv_buf_size": 4096, 00:12:37.093 "send_buf_size": 4096, 00:12:37.093 "enable_recv_pipe": true, 00:12:37.093 "enable_quickack": false, 00:12:37.093 "enable_placement_id": 0, 00:12:37.093 "enable_zerocopy_send_server": true, 00:12:37.093 "enable_zerocopy_send_client": false, 00:12:37.093 "zerocopy_threshold": 0, 00:12:37.093 "tls_version": 0, 00:12:37.093 "enable_ktls": false 00:12:37.093 } 00:12:37.093 }, 00:12:37.093 { 00:12:37.093 "method": "sock_impl_set_options", 00:12:37.093 "params": { 00:12:37.093 "impl_name": "posix", 00:12:37.093 "recv_buf_size": 2097152, 00:12:37.093 "send_buf_size": 2097152, 00:12:37.093 "enable_recv_pipe": true, 00:12:37.093 "enable_quickack": false, 00:12:37.093 "enable_placement_id": 0, 00:12:37.093 "enable_zerocopy_send_server": true, 00:12:37.093 "enable_zerocopy_send_client": false, 00:12:37.093 "zerocopy_threshold": 0, 00:12:37.093 "tls_version": 0, 00:12:37.093 "enable_ktls": false 00:12:37.093 } 00:12:37.093 }, 00:12:37.093 { 00:12:37.093 "method": "sock_impl_set_options", 00:12:37.093 "params": { 00:12:37.093 "impl_name": "uring", 00:12:37.093 
"recv_buf_size": 2097152, 00:12:37.093 "send_buf_size": 2097152, 00:12:37.093 "enable_recv_pipe": true, 00:12:37.093 "enable_quickack": false, 00:12:37.093 "enable_placement_id": 0, 00:12:37.093 "enable_zerocopy_send_server": false, 00:12:37.093 "enable_zerocopy_send_client": false, 00:12:37.093 "zerocopy_threshold": 0, 00:12:37.093 "tls_version": 0, 00:12:37.093 "enable_ktls": false 00:12:37.093 } 00:12:37.093 } 00:12:37.093 ] 00:12:37.093 }, 00:12:37.093 { 00:12:37.093 "subsystem": "vmd", 00:12:37.093 "config": [] 00:12:37.093 }, 00:12:37.093 { 00:12:37.093 "subsystem": "accel", 00:12:37.093 "config": [ 00:12:37.093 { 00:12:37.093 "method": "accel_set_options", 00:12:37.093 "params": { 00:12:37.093 "small_cache_size": 128, 00:12:37.093 "large_cache_size": 16, 00:12:37.093 "task_count": 2048, 00:12:37.093 "sequence_count": 2048, 00:12:37.093 "buf_count": 2048 00:12:37.093 } 00:12:37.093 } 00:12:37.093 ] 00:12:37.093 }, 00:12:37.093 { 00:12:37.093 "subsystem": "bdev", 00:12:37.093 "config": [ 00:12:37.093 { 00:12:37.093 "method": "bdev_set_options", 00:12:37.093 "params": { 00:12:37.093 "bdev_io_pool_size": 65535, 00:12:37.093 "bdev_io_cache_size": 256, 00:12:37.093 "bdev_auto_examine": true, 00:12:37.093 "iobuf_small_cache_size": 128, 00:12:37.093 "iobuf_large_cache_size": 16 00:12:37.093 } 00:12:37.093 }, 00:12:37.093 { 00:12:37.093 "method": "bdev_raid_set_options", 00:12:37.093 "params": { 00:12:37.093 "process_window_size_kb": 1024, 00:12:37.093 "process_max_bandwidth_mb_sec": 0 00:12:37.093 } 00:12:37.093 }, 00:12:37.093 { 00:12:37.093 "method": "bdev_iscsi_set_options", 00:12:37.093 "params": { 00:12:37.093 "timeout_sec": 30 00:12:37.093 } 00:12:37.093 }, 00:12:37.093 { 00:12:37.093 "method": "bdev_nvme_set_options", 00:12:37.093 "params": { 00:12:37.093 "action_on_timeout": "none", 00:12:37.093 "timeout_us": 0, 00:12:37.093 "timeout_admin_us": 0, 00:12:37.093 "keep_alive_timeout_ms": 10000, 00:12:37.093 "arbitration_burst": 0, 00:12:37.093 "low_priority_weight": 0, 00:12:37.093 "medium_priority_weight": 0, 00:12:37.093 "high_priority_weight": 0, 00:12:37.093 "nvme_adminq_poll_period_us": 10000, 00:12:37.093 "nvme_ioq_poll_period_us": 0, 00:12:37.093 "io_queue_requests": 512, 00:12:37.093 "delay_cmd_submit": true, 00:12:37.093 "transport_retry_count": 4, 00:12:37.093 "bdev_retry_count": 3, 00:12:37.093 "transport_ack_timeout": 0, 00:12:37.093 "ctrlr_loss_timeout_sec": 0, 00:12:37.093 "reconnect_delay_sec": 0, 00:12:37.093 "fast_io_fail_timeout_sec": 0, 00:12:37.093 "disable_auto_failback": false, 00:12:37.093 "generate_uuids": false, 00:12:37.093 "transport_tos": 0, 00:12:37.093 "nvme_error_stat": false, 00:12:37.093 "rdma_srq_size": 0, 00:12:37.093 "io_path_stat": false, 00:12:37.093 "allow_accel_sequence": false, 00:12:37.093 "rdma_max_cq_size": 0, 00:12:37.093 "rdma_cm_event_timeout_ms": 0, 00:12:37.093 "dhchap_digests": [ 00:12:37.093 "sha256", 00:12:37.093 "sha384", 00:12:37.093 "sha512" 00:12:37.093 ], 00:12:37.093 "dhchap_dhgroups": [ 00:12:37.093 "null", 00:12:37.093 "ffdhe2048", 00:12:37.093 "ffdhe3072", 00:12:37.093 "ffdhe4096", 00:12:37.093 "ffdhe6144", 00:12:37.093 "ffdhe8192" 00:12:37.093 ] 00:12:37.093 } 00:12:37.093 }, 00:12:37.093 { 00:12:37.093 "method": "bdev_nvme_attach_controller", 00:12:37.093 "params": { 00:12:37.093 "name": "nvme0", 00:12:37.093 "trtype": "TCP", 00:12:37.093 "adrfam": "IPv4", 00:12:37.093 "traddr": "10.0.0.3", 00:12:37.093 "trsvcid": "4420", 00:12:37.093 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:37.093 "prchk_reftag": false, 00:12:37.093 
"prchk_guard": false, 00:12:37.093 "ctrlr_loss_timeout_sec": 0, 00:12:37.093 "reconnect_delay_sec": 0, 00:12:37.093 "fast_io_fail_timeout_sec": 0, 00:12:37.093 "psk": "key0", 00:12:37.093 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:37.093 "hdgst": false, 00:12:37.093 "ddgst": false, 00:12:37.093 "multipath": "multipath" 00:12:37.093 } 00:12:37.093 }, 00:12:37.093 { 00:12:37.093 "method": "bdev_nvme_set_hotplug", 00:12:37.094 "params": { 00:12:37.094 "period_us": 100000, 00:12:37.094 "enable": false 00:12:37.094 } 00:12:37.094 }, 00:12:37.094 { 00:12:37.094 "method": "bdev_enable_histogram", 00:12:37.094 "params": { 00:12:37.094 "name": "nvme0n1", 00:12:37.094 "enable": true 00:12:37.094 } 00:12:37.094 }, 00:12:37.094 { 00:12:37.094 "method": "bdev_wait_for_examine" 00:12:37.094 } 00:12:37.094 ] 00:12:37.094 }, 00:12:37.094 { 00:12:37.094 "subsystem": "nbd", 00:12:37.094 "config": [] 00:12:37.094 } 00:12:37.094 ] 00:12:37.094 }' 00:12:37.094 19:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 70962 00:12:37.094 19:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 70962 ']' 00:12:37.094 19:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 70962 00:12:37.094 19:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:37.094 19:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:37.094 19:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70962 00:12:37.094 19:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:37.094 19:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:37.094 killing process with pid 70962 00:12:37.094 Received shutdown signal, test time was about 1.000000 seconds 00:12:37.094 00:12:37.094 Latency(us) 00:12:37.094 [2024-11-26T19:46:32.341Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:37.094 [2024-11-26T19:46:32.341Z] =================================================================================================================== 00:12:37.094 [2024-11-26T19:46:32.341Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:37.094 19:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70962' 00:12:37.094 19:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 70962 00:12:37.094 19:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 70962 00:12:37.094 19:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 70930 00:12:37.094 19:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 70930 ']' 00:12:37.094 19:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 70930 00:12:37.094 19:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:37.094 19:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:37.094 19:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70930 00:12:37.094 killing process with pid 70930 00:12:37.094 19:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:12:37.094 19:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:37.094 19:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70930' 00:12:37.094 19:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 70930 00:12:37.094 19:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 70930 00:12:37.351 19:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:12:37.351 19:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:37.351 19:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:37.351 19:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:12:37.351 "subsystems": [ 00:12:37.351 { 00:12:37.351 "subsystem": "keyring", 00:12:37.351 "config": [ 00:12:37.351 { 00:12:37.351 "method": "keyring_file_add_key", 00:12:37.351 "params": { 00:12:37.351 "name": "key0", 00:12:37.351 "path": "/tmp/tmp.ZiBJzuZ0GM" 00:12:37.351 } 00:12:37.351 } 00:12:37.351 ] 00:12:37.351 }, 00:12:37.351 { 00:12:37.351 "subsystem": "iobuf", 00:12:37.351 "config": [ 00:12:37.351 { 00:12:37.351 "method": "iobuf_set_options", 00:12:37.351 "params": { 00:12:37.351 "small_pool_count": 8192, 00:12:37.351 "large_pool_count": 1024, 00:12:37.351 "small_bufsize": 8192, 00:12:37.351 "large_bufsize": 135168, 00:12:37.351 "enable_numa": false 00:12:37.351 } 00:12:37.351 } 00:12:37.351 ] 00:12:37.351 }, 00:12:37.351 { 00:12:37.351 "subsystem": "sock", 00:12:37.351 "config": [ 00:12:37.351 { 00:12:37.351 "method": "sock_set_default_impl", 00:12:37.351 "params": { 00:12:37.351 "impl_name": "uring" 00:12:37.351 } 00:12:37.351 }, 00:12:37.351 { 00:12:37.351 "method": "sock_impl_set_options", 00:12:37.351 "params": { 00:12:37.351 "impl_name": "ssl", 00:12:37.351 "recv_buf_size": 4096, 00:12:37.351 "send_buf_size": 4096, 00:12:37.351 "enable_recv_pipe": true, 00:12:37.351 "enable_quickack": false, 00:12:37.351 "enable_placement_id": 0, 00:12:37.351 "enable_zerocopy_send_server": true, 00:12:37.351 "enable_zerocopy_send_client": false, 00:12:37.351 "zerocopy_threshold": 0, 00:12:37.351 "tls_version": 0, 00:12:37.351 "enable_ktls": false 00:12:37.351 } 00:12:37.351 }, 00:12:37.351 { 00:12:37.351 "method": "sock_impl_set_options", 00:12:37.351 "params": { 00:12:37.351 "impl_name": "posix", 00:12:37.351 "recv_buf_size": 2097152, 00:12:37.351 "send_buf_size": 2097152, 00:12:37.351 "enable_recv_pipe": true, 00:12:37.351 "enable_quickack": false, 00:12:37.351 "enable_placement_id": 0, 00:12:37.351 "enable_zerocopy_send_server": true, 00:12:37.351 "enable_zerocopy_send_client": false, 00:12:37.351 "zerocopy_threshold": 0, 00:12:37.351 "tls_version": 0, 00:12:37.351 "enable_ktls": false 00:12:37.351 } 00:12:37.351 }, 00:12:37.351 { 00:12:37.351 "method": "sock_impl_set_options", 00:12:37.351 "params": { 00:12:37.351 "impl_name": "uring", 00:12:37.351 "recv_buf_size": 2097152, 00:12:37.351 "send_buf_size": 2097152, 00:12:37.351 "enable_recv_pipe": true, 00:12:37.351 "enable_quickack": false, 00:12:37.351 "enable_placement_id": 0, 00:12:37.351 "enable_zerocopy_send_server": false, 00:12:37.351 "enable_zerocopy_send_client": false, 00:12:37.351 "zerocopy_threshold": 0, 00:12:37.351 "tls_version": 0, 00:12:37.351 "enable_ktls": false 00:12:37.351 } 00:12:37.351 } 00:12:37.351 ] 00:12:37.351 }, 00:12:37.351 { 
00:12:37.351 "subsystem": "vmd", 00:12:37.351 "config": [] 00:12:37.351 }, 00:12:37.351 { 00:12:37.351 "subsystem": "accel", 00:12:37.351 "config": [ 00:12:37.351 { 00:12:37.351 "method": "accel_set_options", 00:12:37.351 "params": { 00:12:37.351 "small_cache_size": 128, 00:12:37.351 "large_cache_size": 16, 00:12:37.351 "task_count": 2048, 00:12:37.351 "sequence_count": 2048, 00:12:37.351 "buf_count": 2048 00:12:37.351 } 00:12:37.351 } 00:12:37.351 ] 00:12:37.351 }, 00:12:37.351 { 00:12:37.351 "subsystem": "bdev", 00:12:37.351 "config": [ 00:12:37.351 { 00:12:37.351 "method": "bdev_set_options", 00:12:37.351 "params": { 00:12:37.351 "bdev_io_pool_size": 65535, 00:12:37.351 "bdev_io_cache_size": 256, 00:12:37.351 "bdev_auto_examine": true, 00:12:37.351 "iobuf_small_cache_size": 128, 00:12:37.351 "iobuf_large_cache_size": 16 00:12:37.351 } 00:12:37.351 }, 00:12:37.351 { 00:12:37.351 "method": "bdev_raid_set_options", 00:12:37.351 "params": { 00:12:37.351 "process_window_size_kb": 1024, 00:12:37.351 "process_max_bandwidth_mb_sec": 0 00:12:37.351 } 00:12:37.351 }, 00:12:37.351 { 00:12:37.351 "method": "bdev_iscsi_set_options", 00:12:37.351 "params": { 00:12:37.351 "timeout_sec": 30 00:12:37.351 } 00:12:37.351 }, 00:12:37.351 { 00:12:37.351 "method": "bdev_nvme_set_options", 00:12:37.351 "params": { 00:12:37.351 "action_on_timeout": "none", 00:12:37.351 "timeout_us": 0, 00:12:37.351 "timeout_admin_us": 0, 00:12:37.351 "keep_alive_timeout_ms": 10000, 00:12:37.351 "arbitration_burst": 0, 00:12:37.351 "low_priority_weight": 0, 00:12:37.351 "medium_priority_weight": 0, 00:12:37.351 "high_priority_weight": 0, 00:12:37.351 "nvme_adminq_poll_period_us": 10000, 00:12:37.351 "nvme_ioq_poll_period_us": 0, 00:12:37.351 "io_queue_requests": 0, 00:12:37.351 "delay_cmd_submit": true, 00:12:37.351 "transport_retry_count": 4, 00:12:37.351 "bdev_retry_count": 3, 00:12:37.351 "transport_ack_timeout": 0, 00:12:37.351 "ctrlr_loss_timeout_sec": 0, 00:12:37.351 "reconnect_delay_sec": 0, 00:12:37.351 "fast_io_fail_timeout_sec": 0, 00:12:37.351 "disable_auto_failback": false, 00:12:37.351 "generate_uuids": false, 00:12:37.351 "transport_tos": 0, 00:12:37.352 "nvme_error_stat": false, 00:12:37.352 "rdma_srq_size": 0, 00:12:37.352 "io_path_stat": false, 00:12:37.352 "allow_accel_sequence": false, 00:12:37.352 "rdma_max_cq_size": 0, 00:12:37.352 "rdma_cm_event_timeout_ms": 0, 00:12:37.352 "dhchap_digests": [ 00:12:37.352 "sha256", 00:12:37.352 "sha384", 00:12:37.352 "sha512" 00:12:37.352 ], 00:12:37.352 "dhchap_dhgroups": [ 00:12:37.352 "null", 00:12:37.352 "ffdhe2048", 00:12:37.352 "ffdhe3072", 00:12:37.352 "ffdhe4096", 00:12:37.352 "ffdhe6144", 00:12:37.352 "ffdhe8192" 00:12:37.352 ] 00:12:37.352 } 00:12:37.352 }, 00:12:37.352 { 00:12:37.352 "method": "bdev_nvme_set_hotplug", 00:12:37.352 "params": { 00:12:37.352 "period_us": 100000, 00:12:37.352 "enable": false 00:12:37.352 } 00:12:37.352 }, 00:12:37.352 { 00:12:37.352 "method": "bdev_malloc_create", 00:12:37.352 "params": { 00:12:37.352 "name": "malloc0", 00:12:37.352 "num_blocks": 8192, 00:12:37.352 "block_size": 4096, 00:12:37.352 "physical_block_size": 4096, 00:12:37.352 "uuid": "56932b60-e075-4517-9fbe-7efe451ecc25", 00:12:37.352 "optimal_io_boundary": 0, 00:12:37.352 "md_size": 0, 00:12:37.352 "dif_type": 0, 00:12:37.352 "dif_is_head_of_md": false, 00:12:37.352 "dif_pi_format": 0 00:12:37.352 } 00:12:37.352 }, 00:12:37.352 { 00:12:37.352 "method": "bdev_wait_for_examine" 00:12:37.352 } 00:12:37.352 ] 00:12:37.352 }, 00:12:37.352 { 00:12:37.352 "subsystem": 
"nbd", 00:12:37.352 "config": [] 00:12:37.352 }, 00:12:37.352 { 00:12:37.352 "subsystem": "scheduler", 00:12:37.352 "config": [ 00:12:37.352 { 00:12:37.352 "method": "framework_set_scheduler", 00:12:37.352 "params": { 00:12:37.352 "name": "static" 00:12:37.352 } 00:12:37.352 } 00:12:37.352 ] 00:12:37.352 }, 00:12:37.352 { 00:12:37.352 "subsystem": "nvmf", 00:12:37.352 "config": [ 00:12:37.352 { 00:12:37.352 "method": "nvmf_set_config", 00:12:37.352 "params": { 00:12:37.352 "discovery_filter": "match_any", 00:12:37.352 "admin_cmd_passthru": { 00:12:37.352 "identify_ctrlr": false 00:12:37.352 }, 00:12:37.352 "dhchap_digests": [ 00:12:37.352 "sha256", 00:12:37.352 "sha384", 00:12:37.352 "sha512" 00:12:37.352 ], 00:12:37.352 "dhchap_dhgroups": [ 00:12:37.352 "null", 00:12:37.352 "ffdhe2048", 00:12:37.352 "ffdhe3072", 00:12:37.352 "ffdhe4096", 00:12:37.352 "ffdhe6144", 00:12:37.352 "ffdhe8192" 00:12:37.352 ] 00:12:37.352 } 00:12:37.352 }, 00:12:37.352 { 00:12:37.352 "method": "nvmf_set_max_subsystems", 00:12:37.352 "params": { 00:12:37.352 "max_subsystems": 1024 00:12:37.352 } 00:12:37.352 }, 00:12:37.352 { 00:12:37.352 "method": "nvmf_set_crdt", 00:12:37.352 "params": { 00:12:37.352 "crdt1": 0, 00:12:37.352 "crdt2": 0, 00:12:37.352 "crdt3": 0 00:12:37.352 } 00:12:37.352 }, 00:12:37.352 { 00:12:37.352 "method": "nvmf_create_transport", 00:12:37.352 "params": { 00:12:37.352 "trtype": "TCP", 00:12:37.352 "max_queue_depth": 128, 00:12:37.352 "max_io_qpairs_per_ctrlr": 127, 00:12:37.352 "in_capsule_data_size": 4096, 00:12:37.352 "max_io_size": 131072, 00:12:37.352 "io_unit_size": 131072, 00:12:37.352 "max_aq_depth": 128, 00:12:37.352 "num_shared_buffers": 511, 00:12:37.352 "buf_cache_size": 4294967295, 00:12:37.352 "dif_insert_or_strip": false, 00:12:37.352 "zcopy": false, 00:12:37.352 "c2h_success": false, 00:12:37.352 "sock_priority": 0, 00:12:37.352 "abort_timeout_sec": 1, 00:12:37.352 "ack_timeout": 0, 00:12:37.352 "data_wr_pool_size": 0 00:12:37.352 } 00:12:37.352 }, 00:12:37.352 { 00:12:37.352 "method": "nvmf_create_subsystem", 00:12:37.352 "params": { 00:12:37.352 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:37.352 "allow_any_host": false, 00:12:37.352 "serial_number": "00000000000000000000", 00:12:37.352 "model_number": "SPDK bdev Controller", 00:12:37.352 "max_namespaces": 32, 00:12:37.352 "min_cntlid": 1, 00:12:37.352 "max_cntlid": 65519, 00:12:37.352 "ana_reporting": false 00:12:37.352 } 00:12:37.352 }, 00:12:37.352 { 00:12:37.352 "method": "nvmf_subsystem_add_host", 00:12:37.352 "params": { 00:12:37.352 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:37.352 "host": "nqn.2016-06.io.spdk:host1", 00:12:37.352 "psk": "key0" 00:12:37.352 } 00:12:37.352 }, 00:12:37.352 { 00:12:37.352 "method": "nvmf_subsystem_add_ns", 00:12:37.352 "params": { 00:12:37.352 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:37.352 "namespace": { 00:12:37.352 "nsid": 1, 00:12:37.352 "bdev_name": "malloc0", 00:12:37.352 "nguid": "56932B60E07545179FBE7EFE451ECC25", 00:12:37.352 "uuid": "56932b60-e075-4517-9fbe-7efe451ecc25", 00:12:37.352 "no_auto_visible": false 00:12:37.352 } 00:12:37.352 } 00:12:37.352 }, 00:12:37.352 { 00:12:37.352 "method": "nvmf_subsystem_add_listener", 00:12:37.352 "params": { 00:12:37.352 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:37.352 "listen_address": { 00:12:37.352 "trtype": "TCP", 00:12:37.352 "adrfam": "IPv4", 00:12:37.352 "traddr": "10.0.0.3", 00:12:37.352 "trsvcid": "4420" 00:12:37.352 }, 00:12:37.352 "secure_channel": false, 00:12:37.352 "sock_impl": "ssl" 00:12:37.352 } 00:12:37.352 } 
00:12:37.352 ] 00:12:37.352 } 00:12:37.352 ] 00:12:37.352 }' 00:12:37.352 19:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:37.352 19:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71017 00:12:37.352 19:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71017 00:12:37.352 19:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:12:37.352 19:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71017 ']' 00:12:37.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:37.352 19:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:37.352 19:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:37.352 19:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:37.352 19:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:37.352 19:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:37.352 [2024-11-26 19:46:32.462402] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:12:37.352 [2024-11-26 19:46:32.462596] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:37.646 [2024-11-26 19:46:32.599644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.646 [2024-11-26 19:46:32.630663] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:37.646 [2024-11-26 19:46:32.630701] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:37.646 [2024-11-26 19:46:32.630706] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:37.646 [2024-11-26 19:46:32.630710] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:37.646 [2024-11-26 19:46:32.630714] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
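The target is brought back up at this point with its entire JSON configuration piped in over a file descriptor (-c /dev/fd/62) rather than a config file on disk. A minimal sketch of that pattern, using the repo-relative paths that appear elsewhere in this log (the previous instance is assumed to have been stopped already, as in the killprocess steps above, and the ip netns wrapper from the trace is omitted for brevity):

# Sketch only: capture the live configuration and hand it to a fresh
# nvmf_tgt through process substitution, which is what "-c /dev/fd/NN"
# in the trace above corresponds to.
cfg=$(scripts/rpc.py -s /var/tmp/spdk.sock save_config)
build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(printf '%s' "$cfg") &
# Poll the RPC socket until the new instance answers; rpc_get_methods is
# a cheap no-op query used here purely as a readiness probe.
until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done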
00:12:37.646 [2024-11-26 19:46:32.630972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.646 [2024-11-26 19:46:32.772387] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:37.646 [2024-11-26 19:46:32.833415] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:37.646 [2024-11-26 19:46:32.865367] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:37.646 [2024-11-26 19:46:32.865497] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:38.211 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:38.211 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:38.211 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:38.211 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:38.211 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:38.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:38.211 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:38.211 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=71049 00:12:38.211 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 71049 /var/tmp/bdevperf.sock 00:12:38.211 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71049 ']' 00:12:38.211 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:38.211 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:38.211 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
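Both configuration blobs in this trace share the same TLS plumbing: the PSK file is registered in the keyring as "key0", the allowed host references that key, and the listener on 10.0.0.3:4420 is created on the ssl socket implementation with secure_channel left false. A rough per-command equivalent is sketched below; the flag spellings are from memory and may differ between SPDK releases, so the JSON-RPC dump above remains the authoritative record of what this run actually did.

# Illustrative sketch only (flag names may vary by SPDK version).
scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ZiBJzuZ0GM
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host1 --psk key0
# The listener is pinned to the ssl socket implementation; in the dump
# above that shows up as '"sock_impl": "ssl"' on the
# nvmf_subsystem_add_listener entry.
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.3 -s 4420 --sock-impl ssl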
00:12:38.211 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:38.211 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:12:38.211 "subsystems": [ 00:12:38.211 { 00:12:38.211 "subsystem": "keyring", 00:12:38.211 "config": [ 00:12:38.211 { 00:12:38.211 "method": "keyring_file_add_key", 00:12:38.211 "params": { 00:12:38.211 "name": "key0", 00:12:38.211 "path": "/tmp/tmp.ZiBJzuZ0GM" 00:12:38.211 } 00:12:38.211 } 00:12:38.211 ] 00:12:38.211 }, 00:12:38.211 { 00:12:38.211 "subsystem": "iobuf", 00:12:38.211 "config": [ 00:12:38.211 { 00:12:38.211 "method": "iobuf_set_options", 00:12:38.211 "params": { 00:12:38.211 "small_pool_count": 8192, 00:12:38.211 "large_pool_count": 1024, 00:12:38.211 "small_bufsize": 8192, 00:12:38.211 "large_bufsize": 135168, 00:12:38.211 "enable_numa": false 00:12:38.211 } 00:12:38.211 } 00:12:38.211 ] 00:12:38.211 }, 00:12:38.211 { 00:12:38.211 "subsystem": "sock", 00:12:38.211 "config": [ 00:12:38.211 { 00:12:38.211 "method": "sock_set_default_impl", 00:12:38.211 "params": { 00:12:38.211 "impl_name": "uring" 00:12:38.211 } 00:12:38.211 }, 00:12:38.211 { 00:12:38.211 "method": "sock_impl_set_options", 00:12:38.211 "params": { 00:12:38.211 "impl_name": "ssl", 00:12:38.211 "recv_buf_size": 4096, 00:12:38.211 "send_buf_size": 4096, 00:12:38.212 "enable_recv_pipe": true, 00:12:38.212 "enable_quickack": false, 00:12:38.212 "enable_placement_id": 0, 00:12:38.212 "enable_zerocopy_send_server": true, 00:12:38.212 "enable_zerocopy_send_client": false, 00:12:38.212 "zerocopy_threshold": 0, 00:12:38.212 "tls_version": 0, 00:12:38.212 "enable_ktls": false 00:12:38.212 } 00:12:38.212 }, 00:12:38.212 { 00:12:38.212 "method": "sock_impl_set_options", 00:12:38.212 "params": { 00:12:38.212 "impl_name": "posix", 00:12:38.212 "recv_buf_size": 2097152, 00:12:38.212 "send_buf_size": 2097152, 00:12:38.212 "enable_recv_pipe": true, 00:12:38.212 "enable_quickack": false, 00:12:38.212 "enable_placement_id": 0, 00:12:38.212 "enable_zerocopy_send_server": true, 00:12:38.212 "enable_zerocopy_send_client": false, 00:12:38.212 "zerocopy_threshold": 0, 00:12:38.212 "tls_version": 0, 00:12:38.212 "enable_ktls": false 00:12:38.212 } 00:12:38.212 }, 00:12:38.212 { 00:12:38.212 "method": "sock_impl_set_options", 00:12:38.212 "params": { 00:12:38.212 "impl_name": "uring", 00:12:38.212 "recv_buf_size": 2097152, 00:12:38.212 "send_buf_size": 2097152, 00:12:38.212 "enable_recv_pipe": true, 00:12:38.212 "enable_quickack": false, 00:12:38.212 "enable_placement_id": 0, 00:12:38.212 "enable_zerocopy_send_server": false, 00:12:38.212 "enable_zerocopy_send_client": false, 00:12:38.212 "zerocopy_threshold": 0, 00:12:38.212 "tls_version": 0, 00:12:38.212 "enable_ktls": false 00:12:38.212 } 00:12:38.212 } 00:12:38.212 ] 00:12:38.212 }, 00:12:38.212 { 00:12:38.212 "subsystem": "vmd", 00:12:38.212 "config": [] 00:12:38.212 }, 00:12:38.212 { 00:12:38.212 "subsystem": "accel", 00:12:38.212 "config": [ 00:12:38.212 { 00:12:38.212 "method": "accel_set_options", 00:12:38.212 "params": { 00:12:38.212 "small_cache_size": 128, 00:12:38.212 "large_cache_size": 16, 00:12:38.212 "task_count": 2048, 00:12:38.212 "sequence_count": 2048, 00:12:38.212 "buf_count": 2048 00:12:38.212 } 00:12:38.212 } 00:12:38.212 ] 00:12:38.212 }, 00:12:38.212 { 00:12:38.212 "subsystem": "bdev", 00:12:38.212 "config": [ 00:12:38.212 { 00:12:38.212 "method": "bdev_set_options", 00:12:38.212 "params": { 00:12:38.212 "bdev_io_pool_size": 65535, 00:12:38.212 
"bdev_io_cache_size": 256, 00:12:38.212 "bdev_auto_examine": true, 00:12:38.212 "iobuf_small_cache_size": 128, 00:12:38.212 "iobuf_large_cache_size": 16 00:12:38.212 } 00:12:38.212 }, 00:12:38.212 { 00:12:38.212 "method": "bdev_raid_set_options", 00:12:38.212 "params": { 00:12:38.212 "process_window_size_kb": 1024, 00:12:38.212 "process_max_bandwidth_mb_sec": 0 00:12:38.212 } 00:12:38.212 }, 00:12:38.212 { 00:12:38.212 "method": "bdev_iscsi_set_options", 00:12:38.212 "params": { 00:12:38.212 "timeout_sec": 30 00:12:38.212 } 00:12:38.212 }, 00:12:38.212 { 00:12:38.212 "method": "bdev_nvme_set_options", 00:12:38.212 "params": { 00:12:38.212 "action_on_timeout": "none", 00:12:38.212 "timeout_us": 0, 00:12:38.212 "timeout_admin_us": 0, 00:12:38.212 "keep_alive_timeout_ms": 10000, 00:12:38.212 "arbitration_burst": 0, 00:12:38.212 "low_priority_weight": 0, 00:12:38.212 "medium_priority_weight": 0, 00:12:38.212 "high_priority_weight": 0, 00:12:38.212 "nvme_adminq_poll_period_us": 10000, 00:12:38.212 "nvme_ioq_poll_period_us": 0, 00:12:38.212 "io_queue_requests": 512, 00:12:38.212 "delay_cmd_submit": true, 00:12:38.212 "transport_retry_count": 4, 00:12:38.212 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:38.212 "bdev_retry_count": 3, 00:12:38.212 "transport_ack_timeout": 0, 00:12:38.212 "ctrlr_loss_timeout_sec": 0, 00:12:38.212 "reconnect_delay_sec": 0, 00:12:38.212 "fast_io_fail_timeout_sec": 0, 00:12:38.212 "disable_auto_failback": false, 00:12:38.212 "generate_uuids": false, 00:12:38.212 "transport_tos": 0, 00:12:38.212 "nvme_error_stat": false, 00:12:38.212 "rdma_srq_size": 0, 00:12:38.212 "io_path_stat": false, 00:12:38.212 "allow_accel_sequence": false, 00:12:38.212 "rdma_max_cq_size": 0, 00:12:38.212 "rdma_cm_event_timeout_ms": 0, 00:12:38.212 "dhchap_digests": [ 00:12:38.212 "sha256", 00:12:38.212 "sha384", 00:12:38.212 "sha512" 00:12:38.212 ], 00:12:38.212 "dhchap_dhgroups": [ 00:12:38.212 "null", 00:12:38.212 "ffdhe2048", 00:12:38.212 "ffdhe3072", 00:12:38.212 "ffdhe4096", 00:12:38.212 "ffdhe6144", 00:12:38.212 "ffdhe8192" 00:12:38.212 ] 00:12:38.212 } 00:12:38.212 }, 00:12:38.212 { 00:12:38.212 "method": "bdev_nvme_attach_controller", 00:12:38.212 "params": { 00:12:38.212 "name": "nvme0", 00:12:38.212 "trtype": "TCP", 00:12:38.212 "adrfam": "IPv4", 00:12:38.212 "traddr": "10.0.0.3", 00:12:38.212 "trsvcid": "4420", 00:12:38.212 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:38.212 "prchk_reftag": false, 00:12:38.212 "prchk_guard": false, 00:12:38.212 "ctrlr_loss_timeout_sec": 0, 00:12:38.212 "reconnect_delay_sec": 0, 00:12:38.212 "fast_io_fail_timeout_sec": 0, 00:12:38.212 "psk": "key0", 00:12:38.212 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:38.212 "hdgst": false, 00:12:38.212 "ddgst": false, 00:12:38.212 "multipath": "multipath" 00:12:38.212 } 00:12:38.212 }, 00:12:38.212 { 00:12:38.212 "method": "bdev_nvme_set_hotplug", 00:12:38.212 "params": { 00:12:38.212 "period_us": 100000, 00:12:38.212 "enable": false 00:12:38.212 } 00:12:38.212 }, 00:12:38.212 { 00:12:38.212 "method": "bdev_enable_histogram", 00:12:38.212 "params": { 00:12:38.212 "name": "nvme0n1", 00:12:38.212 "enable": true 00:12:38.212 } 00:12:38.212 }, 00:12:38.212 { 00:12:38.212 "method": "bdev_wait_for_examine" 00:12:38.212 } 00:12:38.212 ] 00:12:38.212 }, 00:12:38.212 { 00:12:38.212 "subsystem": "nbd", 00:12:38.212 "config": [] 00:12:38.212 } 00:12:38.212 ] 00:12:38.212 }' 00:12:38.212 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:12:38.212 [2024-11-26 19:46:33.403822] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:12:38.212 [2024-11-26 19:46:33.403884] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71049 ] 00:12:38.469 [2024-11-26 19:46:33.543635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:38.469 [2024-11-26 19:46:33.578563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:38.469 [2024-11-26 19:46:33.689099] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:38.726 [2024-11-26 19:46:33.725601] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:39.289 19:46:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:39.289 19:46:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:39.289 19:46:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:12:39.289 19:46:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:12:39.289 19:46:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:39.289 19:46:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:39.548 Running I/O for 1 seconds... 
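bdevperf was launched just above with -z, so it idles on /var/tmp/bdevperf.sock until told to run. The two RPC steps that follow in the trace are, in condensed form:

# Condensed restatement of the steps above, not an extra action in the run:
# confirm the TLS-backed controller attached under the expected name, then
# kick off the queued verify workload over bdevperf's own RPC socket.
name=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]]
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests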
00:12:40.479 6270.00 IOPS, 24.49 MiB/s 00:12:40.479 Latency(us) 00:12:40.479 [2024-11-26T19:46:35.726Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:40.479 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:40.479 Verification LBA range: start 0x0 length 0x2000 00:12:40.479 nvme0n1 : 1.01 6336.65 24.75 0.00 0.00 20092.40 2621.44 16131.94 00:12:40.479 [2024-11-26T19:46:35.726Z] =================================================================================================================== 00:12:40.479 [2024-11-26T19:46:35.726Z] Total : 6336.65 24.75 0.00 0.00 20092.40 2621.44 16131.94 00:12:40.479 { 00:12:40.479 "results": [ 00:12:40.479 { 00:12:40.479 "job": "nvme0n1", 00:12:40.479 "core_mask": "0x2", 00:12:40.479 "workload": "verify", 00:12:40.479 "status": "finished", 00:12:40.479 "verify_range": { 00:12:40.479 "start": 0, 00:12:40.479 "length": 8192 00:12:40.479 }, 00:12:40.479 "queue_depth": 128, 00:12:40.479 "io_size": 4096, 00:12:40.479 "runtime": 1.009681, 00:12:40.479 "iops": 6336.654844450871, 00:12:40.479 "mibps": 24.752557986136214, 00:12:40.479 "io_failed": 0, 00:12:40.479 "io_timeout": 0, 00:12:40.479 "avg_latency_us": 20092.399243994518, 00:12:40.479 "min_latency_us": 2621.44, 00:12:40.479 "max_latency_us": 16131.938461538462 00:12:40.479 } 00:12:40.479 ], 00:12:40.479 "core_count": 1 00:12:40.479 } 00:12:40.479 19:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:12:40.479 19:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:12:40.479 19:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:12:40.479 19:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:12:40.479 19:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:12:40.479 19:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:12:40.479 19:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:40.479 19:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:12:40.479 19:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:12:40.479 19:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:12:40.479 19:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:40.479 nvmf_trace.0 00:12:40.479 19:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:12:40.479 19:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 71049 00:12:40.479 19:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71049 ']' 00:12:40.479 19:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71049 00:12:40.479 19:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:40.479 19:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:40.479 19:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71049 00:12:40.479 killing process with pid 
71049 00:12:40.479 Received shutdown signal, test time was about 1.000000 seconds 00:12:40.479 00:12:40.479 Latency(us) 00:12:40.479 [2024-11-26T19:46:35.726Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:40.479 [2024-11-26T19:46:35.726Z] =================================================================================================================== 00:12:40.479 [2024-11-26T19:46:35.726Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:40.479 19:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:40.479 19:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:40.479 19:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71049' 00:12:40.479 19:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71049 00:12:40.479 19:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71049 00:12:40.738 19:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:12:40.738 19:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:40.738 19:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:12:40.996 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:40.996 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:12:40.996 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:40.996 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:40.996 rmmod nvme_tcp 00:12:40.996 rmmod nvme_fabrics 00:12:40.996 rmmod nvme_keyring 00:12:40.996 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:40.996 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:12:40.996 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:12:40.996 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 71017 ']' 00:12:40.996 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 71017 00:12:40.996 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71017 ']' 00:12:40.996 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71017 00:12:40.996 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:40.996 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:40.996 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71017 00:12:40.996 killing process with pid 71017 00:12:40.996 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:40.996 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:40.996 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71017' 00:12:40.996 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71017 00:12:40.996 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71017 
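Both helper apps are torn down through the same killprocess helper seen in these traces. Reduced to its essentials (the real function in autotest_common.sh carries extra guards and logging), it behaves roughly like this:

# Simplified sketch of the killprocess helper exercised above; the PIDs it
# reaps here are children of the test shell, so wait is valid.
killprocess() {
    local pid=$1
    kill -0 "$pid"                                      # fail fast if the process already exited
    [[ $(ps --no-headers -o comm= "$pid") != sudo ]]    # refuse to signal a sudo wrapper directly
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                                 # reap it; a non-zero exit here is expected
}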
00:12:41.255 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:41.255 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:41.255 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:41.255 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:12:41.255 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:12:41.255 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:41.255 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:12:41.255 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:41.255 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:41.255 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:41.255 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:41.255 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:41.255 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:41.255 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:41.255 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:41.255 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:41.255 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:41.255 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:41.255 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:41.255 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:41.255 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:41.255 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:41.255 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:41.255 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:41.255 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:41.255 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:41.515 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:12:41.515 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.zx6RtXmBFc /tmp/tmp.aR2u7Kaluo /tmp/tmp.ZiBJzuZ0GM 00:12:41.515 00:12:41.515 real 1m21.445s 00:12:41.515 user 2m15.403s 00:12:41.515 sys 0m21.294s 00:12:41.515 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:41.515 ************************************ 00:12:41.515 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@10 -- # set +x 00:12:41.515 END TEST nvmf_tls 00:12:41.515 ************************************ 00:12:41.515 19:46:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:12:41.515 19:46:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:41.515 19:46:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:41.515 19:46:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:41.515 ************************************ 00:12:41.515 START TEST nvmf_fips 00:12:41.515 ************************************ 00:12:41.515 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:12:41.515 * Looking for test storage... 00:12:41.515 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:12:41.515 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:41.515 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:12:41.515 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:41.515 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:41.515 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:41.515 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:41.515 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:41.515 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:12:41.515 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:12:41.515 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:12:41.515 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:12:41.515 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:12:41.515 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:12:41.515 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:12:41.515 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:41.515 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:12:41.515 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:12:41.515 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:41.515 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:41.515 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:12:41.515 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:12:41.515 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:41.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.516 --rc genhtml_branch_coverage=1 00:12:41.516 --rc genhtml_function_coverage=1 00:12:41.516 --rc genhtml_legend=1 00:12:41.516 --rc geninfo_all_blocks=1 00:12:41.516 --rc geninfo_unexecuted_blocks=1 00:12:41.516 00:12:41.516 ' 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:41.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.516 --rc genhtml_branch_coverage=1 00:12:41.516 --rc genhtml_function_coverage=1 00:12:41.516 --rc genhtml_legend=1 00:12:41.516 --rc geninfo_all_blocks=1 00:12:41.516 --rc geninfo_unexecuted_blocks=1 00:12:41.516 00:12:41.516 ' 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:41.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.516 --rc genhtml_branch_coverage=1 00:12:41.516 --rc genhtml_function_coverage=1 00:12:41.516 --rc genhtml_legend=1 00:12:41.516 --rc geninfo_all_blocks=1 00:12:41.516 --rc geninfo_unexecuted_blocks=1 00:12:41.516 00:12:41.516 ' 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:41.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.516 --rc genhtml_branch_coverage=1 00:12:41.516 --rc genhtml_function_coverage=1 00:12:41.516 --rc genhtml_legend=1 00:12:41.516 --rc geninfo_all_blocks=1 00:12:41.516 --rc geninfo_unexecuted_blocks=1 00:12:41.516 00:12:41.516 ' 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=91838eb1-5852-43eb-90b2-09876f360ab2 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:41.516 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:41.516 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:41.517 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:12:41.517 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:12:41.517 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:12:41.517 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:12:41.517 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:12:41.517 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:12:41.517 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:12:41.517 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:12:41.517 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:12:41.517 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:12:41.517 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:41.517 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:41.517 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:12:41.517 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:41.517 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:12:41.517 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:12:41.517 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:41.517 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:12:41.517 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:12:41.517 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:12:41.517 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:12:41.517 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:12:41.517 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:12:41.517 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:12:41.517 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:41.517 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:12:41.517 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:12:41.517 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:12:41.517 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:12:41.517 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:12:41.517 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:12:41.517 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:12:41.517 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:12:41.517 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:12:41.517 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:12:41.517 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:12:41.517 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:12:41.517 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:12:41.517 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:12:41.517 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:12:41.517 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:12:41.517 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:12:41.776 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:12:41.776 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:12:41.776 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:12:41.776 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:12:41.776 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:12:41.776 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:12:41.776 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:12:41.776 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:12:41.776 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:41.776 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:12:41.776 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:41.776 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:12:41.776 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:41.776 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:12:41.776 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:12:41.776 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:12:41.776 Error setting digest 00:12:41.776 4092F5420A7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:12:41.776 4092F5420A7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:12:41.776 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:12:41.776 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:41.776 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:41.776 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:41.776 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:12:41.776 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:41.776 
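# Editor's note (added): the NOT-wrapped "openssl md5" above is the actual FIPS probe: MD5 is
# not a FIPS-approved digest, so the command must fail once the fips provider is active. A
# minimal sketch of the same check, assuming an OpenSSL 3.x install with a fips provider configured:
export OPENSSL_CONF=spdk_fips.conf                           # config written by build_openssl_config above
openssl list -providers | grep -qi fips || echo "FIPS provider not loaded" >&2
if echo test | openssl md5 >/dev/null 2>&1; then
    echo "MD5 still allowed - FIPS enforcement is NOT active" >&2
else
    echo "MD5 rejected - FIPS enforcement looks active"
fi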
19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:41.776 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:41.776 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:41.776 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:41.776 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:41.776 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:41.776 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:41.776 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:41.776 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:41.776 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:41.776 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:41.777 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:41.777 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:41.777 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:41.777 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:41.777 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:41.777 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:41.777 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:41.777 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:41.777 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:41.777 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:41.777 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:41.777 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:41.777 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:41.777 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:41.777 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:41.777 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:41.777 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:41.777 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:41.777 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:41.777 Cannot find device "nvmf_init_br" 00:12:41.777 19:46:36 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:12:41.777 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:41.777 Cannot find device "nvmf_init_br2" 00:12:41.777 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:12:41.777 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:41.777 Cannot find device "nvmf_tgt_br" 00:12:41.777 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:12:41.777 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:41.777 Cannot find device "nvmf_tgt_br2" 00:12:41.777 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:12:41.777 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:41.777 Cannot find device "nvmf_init_br" 00:12:41.777 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:12:41.777 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:41.777 Cannot find device "nvmf_init_br2" 00:12:41.777 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:12:41.777 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:41.777 Cannot find device "nvmf_tgt_br" 00:12:41.777 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:12:41.777 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:41.777 Cannot find device "nvmf_tgt_br2" 00:12:41.777 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:12:41.777 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:41.777 Cannot find device "nvmf_br" 00:12:41.777 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:12:41.777 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:41.777 Cannot find device "nvmf_init_if" 00:12:41.777 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:12:41.777 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:41.777 Cannot find device "nvmf_init_if2" 00:12:41.777 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:12:41.777 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:41.777 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:41.777 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:12:41.777 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:41.777 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:41.777 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:12:41.777 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:41.777 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:41.777 19:46:36 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:41.777 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:41.777 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:41.777 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:41.777 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:41.777 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:41.777 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:41.777 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:41.777 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:41.777 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:41.777 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:41.777 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:41.777 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:41.777 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:41.777 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:41.777 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:41.777 19:46:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:41.777 19:46:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:41.777 19:46:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:41.777 19:46:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:41.777 19:46:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:42.035 19:46:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:42.035 19:46:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:42.035 19:46:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:42.035 19:46:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:42.035 19:46:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:42.035 19:46:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
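# Editor's note (added): nvmf_veth_init above builds two initiator/target veth pairs, a bridge
# and a network namespace for the target. A condensed sketch of one such pair using the same
# names and addresses as this log (run as root):
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end + bridge end
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target end + bridge end
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target side lives in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up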
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:42.035 19:46:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:42.035 19:46:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:42.035 19:46:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:42.035 19:46:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:42.035 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:42.035 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:12:42.035 00:12:42.035 --- 10.0.0.3 ping statistics --- 00:12:42.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:42.035 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:12:42.035 19:46:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:42.036 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:42.036 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.031 ms 00:12:42.036 00:12:42.036 --- 10.0.0.4 ping statistics --- 00:12:42.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:42.036 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:12:42.036 19:46:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:42.036 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:42.036 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:12:42.036 00:12:42.036 --- 10.0.0.1 ping statistics --- 00:12:42.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:42.036 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:12:42.036 19:46:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:42.036 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:42.036 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.039 ms 00:12:42.036 00:12:42.036 --- 10.0.0.2 ping statistics --- 00:12:42.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:42.036 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:12:42.036 19:46:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:42.036 19:46:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:12:42.036 19:46:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:42.036 19:46:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:42.036 19:46:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:42.036 19:46:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:42.036 19:46:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:42.036 19:46:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:42.036 19:46:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:42.036 19:46:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:12:42.036 19:46:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:42.036 19:46:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:42.036 19:46:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:12:42.036 19:46:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=71365 00:12:42.036 19:46:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 71365 00:12:42.036 19:46:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 71365 ']' 00:12:42.036 19:46:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:42.036 19:46:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:42.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:42.036 19:46:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:42.036 19:46:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:42.036 19:46:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:12:42.036 19:46:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:42.036 [2024-11-26 19:46:37.146302] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
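# Editor's note (added): nvmfappstart above launches nvmf_tgt inside the target namespace and
# waitforlisten polls its RPC socket before any RPCs are issued. A minimal sketch of that
# start-and-wait pattern, reusing the paths shown in this log (the polling loop is an
# illustration, not the SPDK helper itself):
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
for _ in $(seq 1 100); do                                    # wait up to ~10 s for /var/tmp/spdk.sock
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
done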
00:12:42.036 [2024-11-26 19:46:37.146370] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:42.294 [2024-11-26 19:46:37.288008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:42.294 [2024-11-26 19:46:37.324269] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:42.294 [2024-11-26 19:46:37.324311] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:42.294 [2024-11-26 19:46:37.324318] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:42.294 [2024-11-26 19:46:37.324323] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:42.294 [2024-11-26 19:46:37.324327] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:42.294 [2024-11-26 19:46:37.324587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:42.294 [2024-11-26 19:46:37.356395] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:42.860 19:46:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:42.860 19:46:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:12:42.860 19:46:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:42.860 19:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:42.860 19:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:12:42.860 19:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:42.860 19:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:12:42.861 19:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:12:42.861 19:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:12:42.861 19:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.sqz 00:12:42.861 19:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:12:42.861 19:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.sqz 00:12:42.861 19:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.sqz 00:12:42.861 19:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.sqz 00:12:42.861 19:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:43.118 [2024-11-26 19:46:38.226505] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:43.118 [2024-11-26 19:46:38.242452] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:43.118 [2024-11-26 19:46:38.242743] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:43.118 malloc0 00:12:43.118 19:46:38 
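# Editor's note (added): the TLS PSK above is an NVMe/TCP interchange-format key written to a
# mode-0600 temp file; setup_nvmf_tgt_conf then hands that file to the target via rpc.py (the
# exact RPC sequence lives inside fips.sh and is not shown in this excerpt). A minimal sketch
# of the key-file handling (the key value is the test's public sample key, not a secret):
key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
key_path=$(mktemp -t spdk-psk.XXX)
echo -n "$key" > "$key_path"
chmod 0600 "$key_path"                                       # PSKs must not be world-readable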
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:43.118 19:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=71401 00:12:43.118 19:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:43.118 19:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 71401 /var/tmp/bdevperf.sock 00:12:43.118 19:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 71401 ']' 00:12:43.118 19:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:43.118 19:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:43.118 19:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:43.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:43.118 19:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:43.118 19:46:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:12:43.118 [2024-11-26 19:46:38.353176] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:12:43.118 [2024-11-26 19:46:38.353383] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71401 ] 00:12:43.376 [2024-11-26 19:46:38.492080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:43.376 [2024-11-26 19:46:38.530441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:43.376 [2024-11-26 19:46:38.562830] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:44.310 19:46:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:44.310 19:46:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:12:44.310 19:46:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.sqz 00:12:44.310 19:46:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:12:44.567 [2024-11-26 19:46:39.605502] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:44.567 TLSTESTn1 00:12:44.567 19:46:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:44.567 Running I/O for 10 seconds... 
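# Editor's note (added): the initiator side above is driven entirely over bdevperf's RPC socket:
# register the PSK file as keyring key "key0", attach an NVMe-oF controller with --psk, then run
# the workload. Condensed from the commands visible in this trace (same paths and arguments):
bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$bdevperf" -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
# (wait for /var/tmp/bdevperf.sock to appear, e.g. with a polling loop like the one sketched earlier)
"$rpc" -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.sqz
"$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests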
00:12:46.874 5612.00 IOPS, 21.92 MiB/s [2024-11-26T19:46:43.054Z] 6006.00 IOPS, 23.46 MiB/s [2024-11-26T19:46:43.997Z] 6234.33 IOPS, 24.35 MiB/s [2024-11-26T19:46:44.953Z] 6462.50 IOPS, 25.24 MiB/s [2024-11-26T19:46:45.884Z] 6604.00 IOPS, 25.80 MiB/s [2024-11-26T19:46:46.815Z] 6693.33 IOPS, 26.15 MiB/s [2024-11-26T19:46:48.188Z] 6738.57 IOPS, 26.32 MiB/s [2024-11-26T19:46:49.122Z] 6764.62 IOPS, 26.42 MiB/s [2024-11-26T19:46:50.055Z] 6809.89 IOPS, 26.60 MiB/s [2024-11-26T19:46:50.055Z] 6842.30 IOPS, 26.73 MiB/s 00:12:54.808 Latency(us) 00:12:54.808 [2024-11-26T19:46:50.055Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:54.808 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:12:54.808 Verification LBA range: start 0x0 length 0x2000 00:12:54.808 TLSTESTn1 : 10.01 6848.32 26.75 0.00 0.00 18660.85 3428.04 17442.66 00:12:54.808 [2024-11-26T19:46:50.055Z] =================================================================================================================== 00:12:54.808 [2024-11-26T19:46:50.055Z] Total : 6848.32 26.75 0.00 0.00 18660.85 3428.04 17442.66 00:12:54.808 { 00:12:54.808 "results": [ 00:12:54.808 { 00:12:54.808 "job": "TLSTESTn1", 00:12:54.808 "core_mask": "0x4", 00:12:54.808 "workload": "verify", 00:12:54.808 "status": "finished", 00:12:54.808 "verify_range": { 00:12:54.808 "start": 0, 00:12:54.808 "length": 8192 00:12:54.808 }, 00:12:54.808 "queue_depth": 128, 00:12:54.808 "io_size": 4096, 00:12:54.808 "runtime": 10.009309, 00:12:54.808 "iops": 6848.324894355844, 00:12:54.808 "mibps": 26.751269118577515, 00:12:54.808 "io_failed": 0, 00:12:54.808 "io_timeout": 0, 00:12:54.808 "avg_latency_us": 18660.853303348293, 00:12:54.808 "min_latency_us": 3428.036923076923, 00:12:54.808 "max_latency_us": 17442.65846153846 00:12:54.808 } 00:12:54.808 ], 00:12:54.808 "core_count": 1 00:12:54.808 } 00:12:54.808 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:12:54.808 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:12:54.808 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:12:54.809 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:12:54.809 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:12:54.809 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:54.809 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:12:54.809 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:12:54.809 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:12:54.809 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:54.809 nvmf_trace.0 00:12:54.809 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:12:54.809 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 71401 00:12:54.809 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 71401 ']' 00:12:54.809 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 
71401 00:12:54.809 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:12:54.809 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:54.809 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71401 00:12:54.809 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:12:54.809 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:12:54.809 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71401' 00:12:54.809 killing process with pid 71401 00:12:54.809 Received shutdown signal, test time was about 10.000000 seconds 00:12:54.809 00:12:54.809 Latency(us) 00:12:54.809 [2024-11-26T19:46:50.056Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:54.809 [2024-11-26T19:46:50.056Z] =================================================================================================================== 00:12:54.809 [2024-11-26T19:46:50.056Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:54.809 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 71401 00:12:54.809 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 71401 00:12:54.809 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:12:54.809 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:54.809 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:12:55.066 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:55.066 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:12:55.066 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:55.066 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:55.066 rmmod nvme_tcp 00:12:55.066 rmmod nvme_fabrics 00:12:55.066 rmmod nvme_keyring 00:12:55.066 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:55.066 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:12:55.066 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:12:55.066 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 71365 ']' 00:12:55.066 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 71365 00:12:55.066 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 71365 ']' 00:12:55.066 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 71365 00:12:55.066 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:12:55.066 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:55.066 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71365 00:12:55.066 killing process with pid 71365 00:12:55.066 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:55.066 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:55.066 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71365' 00:12:55.066 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 71365 00:12:55.066 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 71365 00:12:55.337 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:55.337 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:55.337 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:55.337 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:12:55.337 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:12:55.337 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:55.337 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:12:55.337 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:55.337 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:55.337 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:55.337 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:55.337 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:55.337 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:55.337 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:55.337 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:55.337 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:55.337 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:55.337 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:55.337 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:55.337 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:55.337 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:55.337 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:55.337 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:55.337 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:55.337 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:55.337 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:55.597 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:12:55.597 19:46:50 
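# Editor's note (added): the teardown above mirrors the setup: only iptables rules tagged with the
# SPDK_NVMF comment are dropped, then the veths, bridge and namespace are deleted. A condensed
# sketch of that cleanup (run as root; errors ignored so it can be re-run safely):
iptables-save | grep -v SPDK_NVMF | iptables-restore         # keep unrelated firewall rules intact
ip link delete nvmf_br type bridge 2>/dev/null
ip link delete nvmf_init_if 2>/dev/null                      # deleting one veth end removes its peer
ip link delete nvmf_init_if2 2>/dev/null
ip netns delete nvmf_tgt_ns_spdk 2>/dev/null                 # destroys nvmf_tgt_if* and their peers too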
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.sqz 00:12:55.597 ************************************ 00:12:55.597 END TEST nvmf_fips 00:12:55.597 ************************************ 00:12:55.597 00:12:55.597 real 0m14.051s 00:12:55.597 user 0m20.481s 00:12:55.597 sys 0m4.588s 00:12:55.597 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:55.597 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:12:55.597 19:46:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:12:55.598 19:46:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:55.598 19:46:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:55.598 19:46:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:55.598 ************************************ 00:12:55.598 START TEST nvmf_control_msg_list 00:12:55.598 ************************************ 00:12:55.598 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:12:55.598 * Looking for test storage... 00:12:55.598 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:55.598 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:55.598 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:12:55.598 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:55.598 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:55.598 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:55.598 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:55.598 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:55.598 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:12:55.598 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:12:55.598 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:12:55.598 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:12:55.598 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:12:55.598 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:12:55.598 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:12:55.598 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:55.598 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:12:55.598 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:12:55.598 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:12:55.598 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:55.598 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:12:55.598 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:12:55.598 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:55.598 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:12:55.598 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:12:55.598 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:12:55.598 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:12:55.598 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:55.598 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:12:55.598 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:12:55.598 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:55.598 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:55.598 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:12:55.598 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:55.598 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:55.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.598 --rc genhtml_branch_coverage=1 00:12:55.598 --rc genhtml_function_coverage=1 00:12:55.598 --rc genhtml_legend=1 00:12:55.598 --rc geninfo_all_blocks=1 00:12:55.598 --rc geninfo_unexecuted_blocks=1 00:12:55.598 00:12:55.598 ' 00:12:55.598 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:55.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.598 --rc genhtml_branch_coverage=1 00:12:55.598 --rc genhtml_function_coverage=1 00:12:55.598 --rc genhtml_legend=1 00:12:55.598 --rc geninfo_all_blocks=1 00:12:55.598 --rc geninfo_unexecuted_blocks=1 00:12:55.598 00:12:55.598 ' 00:12:55.598 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:55.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.598 --rc genhtml_branch_coverage=1 00:12:55.598 --rc genhtml_function_coverage=1 00:12:55.598 --rc genhtml_legend=1 00:12:55.598 --rc geninfo_all_blocks=1 00:12:55.598 --rc geninfo_unexecuted_blocks=1 00:12:55.598 00:12:55.598 ' 00:12:55.598 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:55.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.598 --rc genhtml_branch_coverage=1 00:12:55.598 --rc genhtml_function_coverage=1 00:12:55.598 --rc genhtml_legend=1 00:12:55.598 --rc geninfo_all_blocks=1 00:12:55.598 --rc 
geninfo_unexecuted_blocks=1 00:12:55.598 00:12:55.598 ' 00:12:55.598 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:55.598 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:12:55.598 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:55.598 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:55.598 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:55.598 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:55.598 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:55.598 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:55.598 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:55.598 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:55.598 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:55.598 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:55.599 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:12:55.599 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=91838eb1-5852-43eb-90b2-09876f360ab2 00:12:55.599 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:55.599 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:55.599 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:55.599 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:55.599 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:55.599 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:12:55.599 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:55.599 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:55.599 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:55.599 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
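# Editor's note (added): the common.sh setup above derives the host identity from nvme-cli:
# "nvme gen-hostnqn" emits "nqn.2014-08.org.nvmexpress:uuid:<uuid>" and NVME_HOSTID is just the
# trailing UUID. One way to reproduce that derivation (a sketch, not the exact common.sh code):
NVME_HOSTNQN=$(nvme gen-hostnqn)                             # requires nvme-cli
NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}                         # strip up to and including ":uuid:"
echo "hostnqn=$NVME_HOSTNQN hostid=$NVME_HOSTID"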
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.599 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.599 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.599 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:12:55.599 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.599 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:12:55.599 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:55.599 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:55.599 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:55.599 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:55.599 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:55.599 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:55.599 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:55.599 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:55.599 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:55.599 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:55.599 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:12:55.599 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:55.599 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:55.599 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:55.599 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:55.599 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:55.599 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:55.599 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:55.599 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:55.599 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:55.599 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:55.599 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:55.599 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:55.599 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:55.599 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:55.599 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:55.599 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:55.599 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:55.599 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:55.599 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:55.599 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:55.599 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:55.599 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:55.599 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:55.599 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
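# Editor's note (added): the "[: : integer expression expected" message above comes from testing
# an empty variable with -eq ('[' '' -eq 1 ']'). It is harmless noise in this run, but a defensive
# form gives the variable a numeric default before comparing; SOME_TEST_FLAG below is a stand-in
# name for illustration, not the real variable used by common.sh:
flag=${SOME_TEST_FLAG:-0}
if [ "$flag" -eq 1 ]; then
    echo "feature enabled"
fi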
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:55.599 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:55.599 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:55.599 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:55.599 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:55.600 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:55.600 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:55.600 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:55.600 Cannot find device "nvmf_init_br" 00:12:55.600 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:12:55.600 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:55.600 Cannot find device "nvmf_init_br2" 00:12:55.600 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:12:55.600 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:55.600 Cannot find device "nvmf_tgt_br" 00:12:55.600 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:12:55.600 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:55.600 Cannot find device "nvmf_tgt_br2" 00:12:55.600 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:12:55.600 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:55.600 Cannot find device "nvmf_init_br" 00:12:55.600 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:12:55.600 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:55.600 Cannot find device "nvmf_init_br2" 00:12:55.600 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:12:55.600 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:55.860 Cannot find device "nvmf_tgt_br" 00:12:55.860 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:12:55.860 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:55.860 Cannot find device "nvmf_tgt_br2" 00:12:55.860 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:12:55.860 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:55.860 Cannot find device "nvmf_br" 00:12:55.860 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:12:55.860 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:55.860 Cannot find 
device "nvmf_init_if" 00:12:55.860 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:12:55.860 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:55.860 Cannot find device "nvmf_init_if2" 00:12:55.860 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:12:55.860 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:55.860 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:55.860 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:12:55.860 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:55.860 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:55.860 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:12:55.860 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:55.860 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:55.860 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:55.860 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:55.860 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:55.860 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:55.861 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:55.861 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:55.861 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:55.861 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:55.861 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:55.861 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:55.861 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:55.861 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:55.861 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:55.861 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:55.861 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:55.861 19:46:50 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:55.861 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:55.861 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:55.861 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:55.861 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:55.861 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:55.861 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:55.861 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:55.861 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:55.861 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:55.861 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:55.861 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:55.861 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:55.861 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:55.861 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:55.861 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:55.861 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:55.861 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:12:55.861 00:12:55.861 --- 10.0.0.3 ping statistics --- 00:12:55.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:55.861 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:12:55.861 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:55.861 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:55.861 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.076 ms 00:12:55.861 00:12:55.861 --- 10.0.0.4 ping statistics --- 00:12:55.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:55.861 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:12:55.861 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:55.861 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:55.861 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:12:55.861 00:12:55.861 --- 10.0.0.1 ping statistics --- 00:12:55.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:55.861 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:12:55.861 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:55.861 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:55.861 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.037 ms 00:12:55.861 00:12:55.861 --- 10.0.0.2 ping statistics --- 00:12:55.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:55.861 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:12:55.861 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:55.861 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 00:12:55.861 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:55.861 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:55.861 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:55.861 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:55.861 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:55.861 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:55.861 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:55.861 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:12:55.861 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:55.861 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:55.861 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:12:55.861 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=71790 00:12:55.861 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:12:55.861 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 71790 00:12:55.861 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 71790 ']' 00:12:55.861 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:55.861 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:55.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:55.861 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
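The block above is the nvmf_veth_init step from test/nvmf/common.sh: it creates the nvmf_tgt_ns_spdk namespace, veth pairs for the initiator and target sides, a bridge, the 10.0.0.1-10.0.0.4 addresses, SPDK_NVMF-tagged iptables ACCEPT rules for port 4420, and ping checks, after which the target is launched inside the namespace. A minimal sketch of that topology, reduced to the first initiator/target pair and with error handling and the second pair omitted (names and addresses taken from the log, ordering illustrative):

# Minimal sketch of the veth/bridge topology built by nvmf_veth_init above.
# Only the first initiator/target pair is shown.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # target end lives in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge                                # bridge joins the host-side peers
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# ACCEPT rules are tagged with a comment so teardown can find them again later
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.3                                             # initiator -> target reachability check

The SPDK_NVMF comment is what the later teardown (iptr, nvmf/common.sh@297 further down in this log) relies on: it dumps the ruleset with iptables-save, drops every line matching SPDK_NVMF, and feeds the remainder back through iptables-restore.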
00:12:55.861 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:55.861 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:12:56.121 [2024-11-26 19:46:51.106179] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:12:56.121 [2024-11-26 19:46:51.106229] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:56.121 [2024-11-26 19:46:51.240059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:56.121 [2024-11-26 19:46:51.274912] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:56.121 [2024-11-26 19:46:51.275107] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:56.121 [2024-11-26 19:46:51.275171] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:56.121 [2024-11-26 19:46:51.275218] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:56.121 [2024-11-26 19:46:51.275236] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:56.121 [2024-11-26 19:46:51.275526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.121 [2024-11-26 19:46:51.306000] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:57.055 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:57.056 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:12:57.056 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:57.056 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:57.056 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:12:57.056 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:57.056 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:12:57.056 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:12:57.056 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:12:57.056 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.056 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:12:57.056 [2024-11-26 19:46:52.023068] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:57.056 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.056 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd 
nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:12:57.056 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.056 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:12:57.056 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.056 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:12:57.056 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.056 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:12:57.056 Malloc0 00:12:57.056 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.056 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:12:57.056 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.056 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:12:57.056 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.056 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:12:57.056 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.056 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:12:57.056 [2024-11-26 19:46:52.057750] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:57.056 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.056 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=71822 00:12:57.056 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:12:57.056 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=71823 00:12:57.056 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:12:57.056 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=71824 00:12:57.056 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 71822 00:12:57.056 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:12:57.056 [2024-11-26 19:46:52.236245] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:12:57.056 [2024-11-26 19:46:52.236623] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:12:57.056 [2024-11-26 19:46:52.236812] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:12:58.430 Initializing NVMe Controllers 00:12:58.430 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:12:58.430 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:12:58.430 Initialization complete. Launching workers. 00:12:58.430 ======================================================== 00:12:58.430 Latency(us) 00:12:58.430 Device Information : IOPS MiB/s Average min max 00:12:58.430 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 4317.00 16.86 231.36 177.13 690.36 00:12:58.430 ======================================================== 00:12:58.430 Total : 4317.00 16.86 231.36 177.13 690.36 00:12:58.430 00:12:58.430 Initializing NVMe Controllers 00:12:58.430 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:12:58.430 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:12:58.430 Initialization complete. Launching workers. 00:12:58.430 ======================================================== 00:12:58.430 Latency(us) 00:12:58.430 Device Information : IOPS MiB/s Average min max 00:12:58.430 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 4327.00 16.90 230.86 161.81 380.62 00:12:58.430 ======================================================== 00:12:58.430 Total : 4327.00 16.90 230.86 161.81 380.62 00:12:58.430 00:12:58.430 Initializing NVMe Controllers 00:12:58.430 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:12:58.430 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:12:58.430 Initialization complete. Launching workers. 
00:12:58.430 ======================================================== 00:12:58.430 Latency(us) 00:12:58.430 Device Information : IOPS MiB/s Average min max 00:12:58.430 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 4327.00 16.90 230.85 161.55 380.16 00:12:58.430 ======================================================== 00:12:58.430 Total : 4327.00 16.90 230.85 161.55 380.16 00:12:58.430 00:12:58.430 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 71823 00:12:58.430 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 71824 00:12:58.430 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:12:58.430 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:12:58.430 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:58.430 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:12:58.430 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:58.430 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:12:58.430 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:58.431 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:58.431 rmmod nvme_tcp 00:12:58.431 rmmod nvme_fabrics 00:12:58.431 rmmod nvme_keyring 00:12:58.431 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:58.431 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:12:58.431 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:12:58.431 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 71790 ']' 00:12:58.431 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 71790 00:12:58.431 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 71790 ']' 00:12:58.431 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 71790 00:12:58.431 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:12:58.431 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:58.431 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71790 00:12:58.431 killing process with pid 71790 00:12:58.431 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:58.431 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:58.431 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71790' 00:12:58.431 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 71790 00:12:58.431 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@978 -- # wait 71790 00:12:58.431 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:58.431 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:58.431 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:58.431 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:12:58.431 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:12:58.431 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:58.431 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:12:58.431 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:58.431 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:58.431 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:58.431 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:58.431 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:58.431 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:58.431 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:58.431 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:58.431 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:58.431 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:58.431 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:58.431 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:58.431 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:58.431 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:58.689 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:58.689 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:58.689 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:58.689 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:58.689 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:58.689 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:12:58.689 00:12:58.689 real 0m3.096s 00:12:58.689 user 0m5.392s 00:12:58.689 
sys 0m0.989s 00:12:58.689 ************************************ 00:12:58.689 END TEST nvmf_control_msg_list 00:12:58.689 ************************************ 00:12:58.689 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:58.689 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:12:58.689 19:46:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:12:58.690 19:46:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:58.690 19:46:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:58.690 19:46:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:58.690 ************************************ 00:12:58.690 START TEST nvmf_wait_for_buf 00:12:58.690 ************************************ 00:12:58.690 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:12:58.690 * Looking for test storage... 00:12:58.690 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:58.690 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:58.690 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:12:58.690 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:58.690 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:58.690 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:58.690 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:58.690 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:58.690 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:12:58.690 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:12:58.690 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:12:58.690 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:12:58.690 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:12:58.690 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:12:58.690 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:12:58.690 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:58.690 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:12:58.690 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:12:58.690 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:58.690 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:58.690 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:12:58.690 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:12:58.690 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:58.690 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:12:58.690 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:12:58.690 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:12:58.690 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:12:58.690 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:58.690 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:12:58.690 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:12:58.690 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:58.690 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:58.690 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:12:58.690 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:58.690 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:58.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.690 --rc genhtml_branch_coverage=1 00:12:58.690 --rc genhtml_function_coverage=1 00:12:58.690 --rc genhtml_legend=1 00:12:58.690 --rc geninfo_all_blocks=1 00:12:58.690 --rc geninfo_unexecuted_blocks=1 00:12:58.690 00:12:58.690 ' 00:12:58.690 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:58.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.690 --rc genhtml_branch_coverage=1 00:12:58.690 --rc genhtml_function_coverage=1 00:12:58.690 --rc genhtml_legend=1 00:12:58.690 --rc geninfo_all_blocks=1 00:12:58.690 --rc geninfo_unexecuted_blocks=1 00:12:58.690 00:12:58.690 ' 00:12:58.690 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:58.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.690 --rc genhtml_branch_coverage=1 00:12:58.690 --rc genhtml_function_coverage=1 00:12:58.690 --rc genhtml_legend=1 00:12:58.690 --rc geninfo_all_blocks=1 00:12:58.690 --rc geninfo_unexecuted_blocks=1 00:12:58.690 00:12:58.690 ' 00:12:58.690 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:58.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.690 --rc genhtml_branch_coverage=1 00:12:58.690 --rc genhtml_function_coverage=1 00:12:58.690 --rc genhtml_legend=1 00:12:58.690 --rc geninfo_all_blocks=1 00:12:58.690 --rc geninfo_unexecuted_blocks=1 00:12:58.690 00:12:58.690 ' 00:12:58.690 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:58.690 19:46:53 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:12:58.690 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:58.690 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:58.690 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:58.690 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:58.690 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:58.690 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:58.690 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:58.690 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:58.690 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:58.690 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:58.690 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:12:58.690 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=91838eb1-5852-43eb-90b2-09876f360ab2 00:12:58.690 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:58.690 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:58.691 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:58.691 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:58.691 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:58.691 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:12:58.691 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:58.691 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:58.691 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:58.691 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.691 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.691 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.691 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:12:58.691 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.691 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:12:58.691 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:58.691 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:58.691 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:58.691 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:58.691 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:58.691 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:58.691 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:58.691 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:58.691 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:58.691 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:58.691 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:12:58.691 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
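The "[: : integer expression expected" message that appears each time nvmf/common.sh is sourced (line 33, '[' '' -eq 1 ']') is bash complaining that an empty string is being compared numerically; the test simply evaluates false and the script carries on. A small sketch of the pattern and a quiet equivalent (the variable name here is illustrative, not the one common.sh uses):

# Hypothetical flag name; an empty value fed to a numeric test reproduces the warning seen above.
FLAG=""
if [ "$FLAG" -eq 1 ]; then echo enabled; fi      # prints "[: : integer expression expected", test is false
if [ "${FLAG:-0}" -eq 1 ]; then echo enabled; fi # defaulting to 0 gives the same result without the warning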
00:12:58.691 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:58.691 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:58.691 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:58.691 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:58.691 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:58.691 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:58.691 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:58.691 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:58.691 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:58.691 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:58.691 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:58.691 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:58.691 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:58.691 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:58.691 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:58.691 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:58.691 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:58.691 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:58.691 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:58.691 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:58.691 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:58.691 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:58.691 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:58.691 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:58.691 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:58.691 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:58.691 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:58.691 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:58.691 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:58.691 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:58.951 Cannot find device "nvmf_init_br" 00:12:58.951 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:12:58.951 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:58.951 Cannot find device "nvmf_init_br2" 00:12:58.951 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:12:58.951 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:58.951 Cannot find device "nvmf_tgt_br" 00:12:58.951 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:12:58.951 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:58.951 Cannot find device "nvmf_tgt_br2" 00:12:58.951 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:12:58.951 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:58.951 Cannot find device "nvmf_init_br" 00:12:58.951 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:12:58.951 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:58.951 Cannot find device "nvmf_init_br2" 00:12:58.951 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:12:58.951 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:58.951 Cannot find device "nvmf_tgt_br" 00:12:58.951 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:12:58.951 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:58.951 Cannot find device "nvmf_tgt_br2" 00:12:58.951 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:12:58.951 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:58.951 Cannot find device "nvmf_br" 00:12:58.951 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:12:58.951 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:58.951 Cannot find device "nvmf_init_if" 00:12:58.951 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:12:58.951 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:58.951 Cannot find device "nvmf_init_if2" 00:12:58.951 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:12:58.951 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:58.951 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:58.951 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:12:58.951 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:58.951 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:58.951 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:12:58.951 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:58.951 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:58.952 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:58.952 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:58.952 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:58.952 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:58.952 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:58.952 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:58.952 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:58.952 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:58.952 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:58.952 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:58.952 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:58.952 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:58.952 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:58.952 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:58.952 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:58.952 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:58.952 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:58.952 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:58.952 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:58.952 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:58.952 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:58.952 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:58.952 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:58.952 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:58.952 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:58.952 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:58.952 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:58.952 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:59.211 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:59.211 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:59.211 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:59.211 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:59.211 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:12:59.211 00:12:59.211 --- 10.0.0.3 ping statistics --- 00:12:59.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:59.211 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:12:59.211 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:59.211 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:59.211 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms 00:12:59.211 00:12:59.211 --- 10.0.0.4 ping statistics --- 00:12:59.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:59.211 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:12:59.211 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:59.211 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:59.211 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.015 ms 00:12:59.211 00:12:59.212 --- 10.0.0.1 ping statistics --- 00:12:59.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:59.212 rtt min/avg/max/mdev = 0.015/0.015/0.015/0.000 ms 00:12:59.212 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:59.212 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:59.212 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.036 ms 00:12:59.212 00:12:59.212 --- 10.0.0.2 ping statistics --- 00:12:59.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:59.212 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:12:59.212 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:59.212 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:12:59.212 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:59.212 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:59.212 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:59.212 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:59.212 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:59.212 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:59.212 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:59.212 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:12:59.212 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:59.212 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:59.212 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:12:59.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:59.212 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=72055 00:12:59.212 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 72055 00:12:59.212 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 72055 ']' 00:12:59.212 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:59.212 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:59.212 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:59.212 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:12:59.212 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:59.212 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:12:59.212 [2024-11-26 19:46:54.262532] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
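Above, nvmfappstart launches the target inside the namespace with --wait-for-rpc (pid 72055), so the app idles until RPC configuration arrives on /var/tmp/spdk.sock, and waitforlisten polls until the socket answers. A rough sketch of that launch-and-wait step, assuming the stock rpc.py client; the real waitforlisten in autotest_common.sh does additional checking:

# Sketch only: start the target in the test namespace and poll its RPC socket.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
nvmfpid=$!
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" || { echo "nvmf_tgt exited before listening" >&2; exit 1; }   # bail out if the app died
    sleep 0.1
done
echo "nvmf_tgt (pid $nvmfpid) is listening on /var/tmp/spdk.sock"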
00:12:59.212 [2024-11-26 19:46:54.262590] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:59.212 [2024-11-26 19:46:54.399966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:59.212 [2024-11-26 19:46:54.436926] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:59.212 [2024-11-26 19:46:54.437115] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:59.212 [2024-11-26 19:46:54.437181] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:59.212 [2024-11-26 19:46:54.437208] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:59.212 [2024-11-26 19:46:54.437224] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:59.212 [2024-11-26 19:46:54.437525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.146 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:00.146 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:13:00.146 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:00.146 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:00.146 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:00.146 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:00.146 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:13:00.146 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:13:00.146 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:13:00.146 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.146 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:00.146 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.146 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:13:00.146 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.146 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:00.146 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.146 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:13:00.146 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.146 19:46:55 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:00.146 [2024-11-26 19:46:55.224527] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:00.146 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.146 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:13:00.146 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.146 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:00.146 Malloc0 00:13:00.146 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.146 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:13:00.146 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.146 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:00.146 [2024-11-26 19:46:55.268275] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:00.146 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.146 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:13:00.146 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.146 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:00.146 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.146 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:13:00.146 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.146 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:00.146 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.146 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:13:00.146 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.146 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:00.146 [2024-11-26 19:46:55.292331] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:00.146 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.146 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:13:00.405 [2024-11-26 19:46:55.487849] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:13:01.778 Initializing NVMe Controllers 00:13:01.778 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:13:01.778 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:13:01.778 Initialization complete. Launching workers. 00:13:01.778 ======================================================== 00:13:01.778 Latency(us) 00:13:01.778 Device Information : IOPS MiB/s Average min max 00:13:01.778 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 504.00 63.00 7992.26 6304.60 8695.99 00:13:01.778 ======================================================== 00:13:01.778 Total : 504.00 63.00 7992.26 6304.60 8695.99 00:13:01.778 00:13:01.778 19:46:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:13:01.778 19:46:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.778 19:46:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:01.778 19:46:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:13:01.778 19:46:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.778 19:46:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4788 00:13:01.778 19:46:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4788 -eq 0 ]] 00:13:01.778 19:46:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:13:01.778 19:46:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:13:01.778 19:46:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:01.778 19:46:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:13:01.778 19:46:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:01.778 19:46:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:13:01.778 19:46:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:01.778 19:46:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:01.778 rmmod nvme_tcp 00:13:01.778 rmmod nvme_fabrics 00:13:01.778 rmmod nvme_keyring 00:13:01.778 19:46:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:01.778 19:46:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:13:01.778 19:46:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:13:01.778 19:46:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 72055 ']' 00:13:01.778 19:46:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 72055 00:13:01.778 19:46:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 72055 ']' 00:13:01.778 19:46:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # 
kill -0 72055 00:13:01.778 19:46:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:13:01.778 19:46:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:01.778 19:46:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72055 00:13:01.778 killing process with pid 72055 00:13:01.778 19:46:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:01.778 19:46:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:01.778 19:46:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72055' 00:13:01.778 19:46:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 72055 00:13:01.778 19:46:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 72055 00:13:02.036 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:02.036 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:02.036 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:02.036 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:13:02.036 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:13:02.036 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:02.036 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:13:02.036 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:02.036 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:02.036 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:02.036 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:02.036 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:02.036 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:02.036 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:02.036 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:02.036 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:02.036 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:02.036 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:02.036 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:02.036 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:02.036 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:02.036 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:02.036 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:02.036 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:02.036 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:02.036 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:02.295 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:13:02.295 00:13:02.295 real 0m3.520s 00:13:02.295 user 0m3.086s 00:13:02.295 sys 0m0.626s 00:13:02.295 ************************************ 00:13:02.295 END TEST nvmf_wait_for_buf 00:13:02.295 ************************************ 00:13:02.295 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:02.295 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:02.295 19:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:13:02.295 19:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:13:02.295 19:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:13:02.295 19:46:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:02.295 19:46:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:02.295 19:46:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:02.295 ************************************ 00:13:02.295 START TEST nvmf_nsid 00:13:02.295 ************************************ 00:13:02.295 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:13:02.295 * Looking for test storage... 
00:13:02.295 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:02.295 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:02.295 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:02.295 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:13:02.295 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:02.295 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:02.295 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:02.295 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:02.295 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:13:02.295 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:13:02.295 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:13:02.295 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:13:02.295 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:13:02.295 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:13:02.295 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:13:02.295 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:02.295 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:13:02.295 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:13:02.295 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:02.295 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:02.295 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:13:02.295 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:02.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:02.296 --rc genhtml_branch_coverage=1 00:13:02.296 --rc genhtml_function_coverage=1 00:13:02.296 --rc genhtml_legend=1 00:13:02.296 --rc geninfo_all_blocks=1 00:13:02.296 --rc geninfo_unexecuted_blocks=1 00:13:02.296 00:13:02.296 ' 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:02.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:02.296 --rc genhtml_branch_coverage=1 00:13:02.296 --rc genhtml_function_coverage=1 00:13:02.296 --rc genhtml_legend=1 00:13:02.296 --rc geninfo_all_blocks=1 00:13:02.296 --rc geninfo_unexecuted_blocks=1 00:13:02.296 00:13:02.296 ' 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:02.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:02.296 --rc genhtml_branch_coverage=1 00:13:02.296 --rc genhtml_function_coverage=1 00:13:02.296 --rc genhtml_legend=1 00:13:02.296 --rc geninfo_all_blocks=1 00:13:02.296 --rc geninfo_unexecuted_blocks=1 00:13:02.296 00:13:02.296 ' 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:02.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:02.296 --rc genhtml_branch_coverage=1 00:13:02.296 --rc genhtml_function_coverage=1 00:13:02.296 --rc genhtml_legend=1 00:13:02.296 --rc geninfo_all_blocks=1 00:13:02.296 --rc geninfo_unexecuted_blocks=1 00:13:02.296 00:13:02.296 ' 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
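As an aside on the lcov gate stepped through above: the trace walks the small helper in scripts/common.sh that compares version strings (lt calling cmp_versions), splitting each version on '.', '-' and ':' and comparing the fields numerically. A condensed stand-alone sketch of that logic, keeping the function names but with simplified bodies that assume numeric fields and only handle the '<' and '>' operators:

  cmp_versions() {  # usage: cmp_versions VER1 OP VER2  (sketch: numeric fields, '<' and '>' only)
      local op=$2 IFS=.-:
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$3"
      local v a b
      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
          a=$(( 10#${ver1[v]:-0} )); b=$(( 10#${ver2[v]:-0} ))
          (( a == b )) && continue
          (( a < b )) && [[ $op == '<' ]] && return 0
          (( a > b )) && [[ $op == '>' ]] && return 0
          return 1
      done
      return 1  # versions are equal, so neither strict comparison holds
  }
  lt() { cmp_versions "$1" '<' "$2"; }   # lt 1.15 2 succeeds, matching the trace above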
00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=91838eb1-5852-43eb-90b2-09876f360ab2 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:02.296 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:02.296 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:02.297 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:02.297 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:02.297 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:02.297 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:02.297 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:02.297 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:02.297 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:02.297 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:02.297 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:02.297 Cannot find device "nvmf_init_br" 00:13:02.297 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 00:13:02.297 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:02.297 Cannot find device "nvmf_init_br2" 00:13:02.297 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 00:13:02.297 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:02.297 Cannot find device "nvmf_tgt_br" 00:13:02.297 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 00:13:02.297 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:02.555 Cannot find device "nvmf_tgt_br2" 00:13:02.555 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 00:13:02.555 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:02.555 Cannot find device "nvmf_init_br" 00:13:02.555 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 00:13:02.555 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:02.556 Cannot find device "nvmf_init_br2" 00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:02.556 Cannot find device "nvmf_tgt_br" 00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:02.556 Cannot find device "nvmf_tgt_br2" 00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:02.556 Cannot find device "nvmf_br" 00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:02.556 Cannot find device "nvmf_init_if" 00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:02.556 Cannot find device "nvmf_init_if2" 00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:02.556 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:13:02.556 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
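At this point nvmf_veth_init has built the test network for the nsid run: a dedicated namespace for the target, four veth pairs, and a bridge joining the root-namespace ends. A condensed sketch of the equivalent iproute2/iptables commands, with names and addresses taken from the surrounding trace (error handling and the SPDK_NVMF iptables comment tags omitted):

  NS=nvmf_tgt_ns_spdk
  ip netns add "$NS"
  # one veth pair per interface; the *_br peers stay in the root namespace
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns "$NS"
  ip link set nvmf_tgt_if2 netns "$NS"
  # initiator side gets 10.0.0.1/10.0.0.2, the namespaced target side 10.0.0.3/10.0.0.4
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec "$NS" ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" up
  done
  ip netns exec "$NS" ip link set nvmf_tgt_if up
  ip netns exec "$NS" ip link set nvmf_tgt_if2 up
  ip netns exec "$NS" ip link set lo up
  # a single bridge ties the four root-namespace veth ends together
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br
  done
  # accept NVMe/TCP traffic on port 4420 and let the bridge forward between its own ports
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The ping checks that follow simply confirm connectivity in both directions: the root namespace reaching 10.0.0.3/10.0.0.4 inside the target namespace, and the namespace reaching 10.0.0.1/10.0.0.2 back.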
00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:02.556 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:02.556 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:13:02.556 00:13:02.556 --- 10.0.0.3 ping statistics --- 00:13:02.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.556 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:02.556 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:02.556 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.035 ms 00:13:02.556 00:13:02.556 --- 10.0.0.4 ping statistics --- 00:13:02.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.556 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:02.556 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:02.556 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:13:02.556 00:13:02.556 --- 10.0.0.1 ping statistics --- 00:13:02.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.556 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:02.556 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:02.556 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:13:02.556 00:13:02.556 --- 10.0.0.2 ping statistics --- 00:13:02.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.556 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=72318 00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 72318 00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 72318 ']' 00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:02.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:13:02.556 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:13:02.815 [2024-11-26 19:46:57.835812] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:13:02.815 [2024-11-26 19:46:57.835876] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:02.815 [2024-11-26 19:46:57.973186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:02.815 [2024-11-26 19:46:58.007958] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:02.815 [2024-11-26 19:46:58.007997] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:02.815 [2024-11-26 19:46:58.008004] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:02.815 [2024-11-26 19:46:58.008009] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:02.815 [2024-11-26 19:46:58.008014] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:02.815 [2024-11-26 19:46:58.008272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.815 [2024-11-26 19:46:58.038353] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:03.749 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:03.749 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:13:03.749 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:03.749 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:03.749 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:13:03.749 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:03.749 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:03.749 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=72350 00:13:03.749 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:13:03.749 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 00:13:03.749 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:13:03.749 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:13:03.749 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:13:03.749 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:13:03.749 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:03.749 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:03.749 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:13:03.749 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:03.749 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:13:03.749 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 
-- # [[ -z 10.0.0.1 ]] 00:13:03.749 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:13:03.749 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:13:03.749 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:13:03.749 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=0877e90f-eebe-4c86-a725-1120c8d7f397 00:13:03.749 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:13:03.749 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=ac10da62-e575-4443-a6fd-bae722e34781 00:13:03.749 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:13:03.749 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=c41eb5d2-5a96-4a2f-b93f-5649794ed873 00:13:03.749 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:13:03.749 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.749 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:13:03.749 null0 00:13:03.749 null1 00:13:03.749 [2024-11-26 19:46:58.766899] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:13:03.749 [2024-11-26 19:46:58.766975] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72350 ] 00:13:03.749 null2 00:13:03.749 [2024-11-26 19:46:58.772169] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:03.749 [2024-11-26 19:46:58.796249] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:03.749 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.749 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 72350 /var/tmp/tgt2.sock 00:13:03.749 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 72350 ']' 00:13:03.749 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:13:03.749 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:03.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:13:03.749 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 
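The nsid test now has two targets in play: the nvmf_tgt started earlier on /var/tmp/spdk.sock (pid 72318), which has just been configured to listen on 10.0.0.3 port 4420, and a second spdk_tgt launched on /var/tmp/tgt2.sock, which is about to expose nqn.2024-10.io.spdk:cnode2 on 10.0.0.1 port 4421 with three namespaces built from the freshly generated UUIDs. The NGUID checks further down rely on the fact that each namespace's NGUID is its UUID with the dashes removed; a minimal sketch of that verification, assuming nvme-cli and jq as in the trace (the check_nguid helper name is illustrative, not the script's own):

  # connect to the second target, using the host NQN/ID generated earlier in the trace
  nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 \
      --hostid=91838eb1-5852-43eb-90b2-09876f360ab2

  # hypothetical helper mirroring uuid2nguid + nvme_get_nguid: the NGUID reported by
  # 'nvme id-ns' should equal the namespace UUID with the dashes stripped
  check_nguid() {
      local dev=$1 uuid=$2 want got
      want=$(tr -d '-' <<< "$uuid")
      got=$(nvme id-ns "$dev" -o json | jq -r .nguid)
      [[ ${got^^} == "${want^^}" ]]
  }
  check_nguid /dev/nvme0n1 "$ns1uuid"   # 0877E90F... in this run
  check_nguid /dev/nvme0n2 "$ns2uuid"   # AC10DA62... in this run
  check_nguid /dev/nvme0n3 "$ns3uuid"   # C41EB5D2... in this run
  nvme disconnect -d /dev/nvme0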
00:13:03.749 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:03.749 19:46:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:13:03.749 [2024-11-26 19:46:58.908170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:03.749 [2024-11-26 19:46:58.943970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:03.749 [2024-11-26 19:46:58.988281] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:04.007 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:04.007 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:13:04.007 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:13:04.265 [2024-11-26 19:46:59.448120] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:04.265 [2024-11-26 19:46:59.464216] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:13:04.265 nvme0n1 nvme0n2 00:13:04.265 nvme1n1 00:13:04.523 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:13:04.523 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:13:04.523 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid=91838eb1-5852-43eb-90b2-09876f360ab2 00:13:04.523 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:13:04.523 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:13:04.523 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:13:04.523 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:13:04.523 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:13:04.523 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:13:04.523 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:13:04.523 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:13:04.523 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:13:04.523 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:13:04.523 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:13:04.523 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:13:04.523 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:13:05.457 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:13:05.457 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:13:05.457 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:13:05.457 19:47:00 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:13:05.457 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:13:05.457 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 0877e90f-eebe-4c86-a725-1120c8d7f397 00:13:05.457 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:13:05.457 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:13:05.457 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:13:05.457 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:13:05.457 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:13:05.457 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=0877e90feebe4c86a7251120c8d7f397 00:13:05.457 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 0877E90FEEBE4C86A7251120C8D7F397 00:13:05.457 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 0877E90FEEBE4C86A7251120C8D7F397 == \0\8\7\7\E\9\0\F\E\E\B\E\4\C\8\6\A\7\2\5\1\1\2\0\C\8\D\7\F\3\9\7 ]] 00:13:05.457 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:13:05.457 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:13:05.457 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:13:05.715 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:13:05.715 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:13:05.715 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:13:05.715 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:13:05.715 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid ac10da62-e575-4443-a6fd-bae722e34781 00:13:05.715 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:13:05.715 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:13:05.715 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:13:05.715 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:13:05.715 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:13:05.715 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=ac10da62e5754443a6fdbae722e34781 00:13:05.715 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo AC10DA62E5754443A6FDBAE722E34781 00:13:05.715 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ AC10DA62E5754443A6FDBAE722E34781 == \A\C\1\0\D\A\6\2\E\5\7\5\4\4\4\3\A\6\F\D\B\A\E\7\2\2\E\3\4\7\8\1 ]] 00:13:05.715 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:13:05.715 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:13:05.715 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:13:05.715 19:47:00 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:13:05.715 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:13:05.715 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:13:05.715 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:13:05.715 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid c41eb5d2-5a96-4a2f-b93f-5649794ed873 00:13:05.715 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:13:05.715 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:13:05.715 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:13:05.715 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:13:05.715 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:13:05.715 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=c41eb5d25a964a2fb93f5649794ed873 00:13:05.715 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo C41EB5D25A964A2FB93F5649794ED873 00:13:05.715 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ C41EB5D25A964A2FB93F5649794ED873 == \C\4\1\E\B\5\D\2\5\A\9\6\4\A\2\F\B\9\3\F\5\6\4\9\7\9\4\E\D\8\7\3 ]] 00:13:05.715 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:13:05.973 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:13:05.973 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:13:05.973 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 72350 00:13:05.973 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 72350 ']' 00:13:05.973 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 72350 00:13:05.973 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:13:05.973 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:05.973 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72350 00:13:05.973 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:05.973 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:05.973 killing process with pid 72350 00:13:05.973 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72350' 00:13:05.973 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 72350 00:13:05.973 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 72350 00:13:05.973 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:13:05.973 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:05.973 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:13:06.231 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # 
'[' tcp == tcp ']' 00:13:06.231 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:13:06.231 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:06.231 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:06.231 rmmod nvme_tcp 00:13:06.231 rmmod nvme_fabrics 00:13:06.231 rmmod nvme_keyring 00:13:06.231 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:06.231 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:13:06.231 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:13:06.231 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 72318 ']' 00:13:06.231 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 72318 00:13:06.231 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 72318 ']' 00:13:06.231 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 72318 00:13:06.231 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:13:06.231 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:06.231 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72318 00:13:06.231 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:06.231 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:06.231 killing process with pid 72318 00:13:06.231 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72318' 00:13:06.231 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 72318 00:13:06.231 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 72318 00:13:06.231 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:06.231 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:06.231 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:06.231 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:13:06.231 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:06.231 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:13:06.231 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:13:06.231 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:06.231 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:06.231 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:06.231 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:06.231 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:06.489 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set 
nvmf_tgt_br2 nomaster 00:13:06.489 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:06.489 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:06.489 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:06.489 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:06.489 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:06.489 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:06.489 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:06.489 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:06.489 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:06.489 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:06.489 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:06.489 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:06.489 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:06.489 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 00:13:06.489 00:13:06.489 real 0m4.358s 00:13:06.489 user 0m6.264s 00:13:06.489 sys 0m1.236s 00:13:06.489 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:06.489 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:13:06.489 ************************************ 00:13:06.489 END TEST nvmf_nsid 00:13:06.489 ************************************ 00:13:06.489 19:47:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:13:06.489 ************************************ 00:13:06.489 END TEST nvmf_target_extra 00:13:06.489 00:13:06.489 real 4m23.266s 00:13:06.489 user 9m1.085s 00:13:06.490 sys 0m50.068s 00:13:06.490 19:47:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:06.490 19:47:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:06.490 ************************************ 00:13:06.751 19:47:01 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:13:06.751 19:47:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:06.751 19:47:01 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:06.751 19:47:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:06.751 ************************************ 00:13:06.751 START TEST nvmf_host 00:13:06.751 ************************************ 00:13:06.751 19:47:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:13:06.751 * Looking for test storage... 
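The nsid checks traced above boil down to: wait for the namespace block device to appear, read its NGUID with nvme-cli, and compare it against the UUID given at namespace creation with the dashes stripped. A minimal stand-alone sketch of that check, assuming a connected controller, nvme-cli and jq on the PATH; the device name and sample UUID below are illustrative values copied from this log, not the test script itself:

# Hypothetical example values; the real test derives them from its own RPC calls.
dev=/dev/nvme0n1
expected_uuid=0877e90f-eebe-4c86-a725-1120c8d7f397

# Wait for the namespace block device to show up (same idea as waitforblk above).
for i in $(seq 1 15); do
    lsblk -l -o NAME | grep -q -w "$(basename "$dev")" && break
    sleep 1
done

# NGUID is the UUID with dashes removed; compare case-insensitively.
expected_nguid=$(echo "$expected_uuid" | tr -d - | tr '[:lower:]' '[:upper:]')
actual_nguid=$(nvme id-ns "$dev" -o json | jq -r .nguid | tr '[:lower:]' '[:upper:]')
[ "$actual_nguid" = "$expected_nguid" ] && echo "NGUID matches" || echo "NGUID mismatch"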
00:13:06.751 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:13:06.751 19:47:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:06.751 19:47:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:13:06.751 19:47:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:06.751 19:47:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:06.751 19:47:01 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:06.751 19:47:01 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:06.751 19:47:01 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:06.751 19:47:01 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:13:06.751 19:47:01 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:13:06.751 19:47:01 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:13:06.751 19:47:01 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:13:06.751 19:47:01 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:13:06.751 19:47:01 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:13:06.751 19:47:01 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:13:06.751 19:47:01 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:06.751 19:47:01 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:13:06.751 19:47:01 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:13:06.751 19:47:01 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:06.751 19:47:01 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:06.751 19:47:01 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:13:06.751 19:47:01 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:13:06.751 19:47:01 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:06.751 19:47:01 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:13:06.751 19:47:01 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:13:06.751 19:47:01 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:13:06.751 19:47:01 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:13:06.751 19:47:01 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:06.751 19:47:01 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:13:06.751 19:47:01 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:13:06.751 19:47:01 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:06.751 19:47:01 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:06.751 19:47:01 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:13:06.751 19:47:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:06.751 19:47:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:06.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:06.751 --rc genhtml_branch_coverage=1 00:13:06.751 --rc genhtml_function_coverage=1 00:13:06.751 --rc genhtml_legend=1 00:13:06.751 --rc geninfo_all_blocks=1 00:13:06.751 --rc geninfo_unexecuted_blocks=1 00:13:06.751 00:13:06.751 ' 00:13:06.751 19:47:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:06.751 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:13:06.751 --rc genhtml_branch_coverage=1 00:13:06.751 --rc genhtml_function_coverage=1 00:13:06.751 --rc genhtml_legend=1 00:13:06.751 --rc geninfo_all_blocks=1 00:13:06.751 --rc geninfo_unexecuted_blocks=1 00:13:06.751 00:13:06.751 ' 00:13:06.751 19:47:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:06.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:06.751 --rc genhtml_branch_coverage=1 00:13:06.751 --rc genhtml_function_coverage=1 00:13:06.751 --rc genhtml_legend=1 00:13:06.751 --rc geninfo_all_blocks=1 00:13:06.752 --rc geninfo_unexecuted_blocks=1 00:13:06.752 00:13:06.752 ' 00:13:06.752 19:47:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:06.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:06.752 --rc genhtml_branch_coverage=1 00:13:06.752 --rc genhtml_function_coverage=1 00:13:06.752 --rc genhtml_legend=1 00:13:06.752 --rc geninfo_all_blocks=1 00:13:06.752 --rc geninfo_unexecuted_blocks=1 00:13:06.752 00:13:06.752 ' 00:13:06.752 19:47:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:06.752 19:47:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:13:06.752 19:47:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:06.752 19:47:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:06.752 19:47:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:06.752 19:47:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:06.752 19:47:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:06.752 19:47:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:06.752 19:47:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:06.752 19:47:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:06.752 19:47:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:06.752 19:47:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:06.752 19:47:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:13:06.752 19:47:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=91838eb1-5852-43eb-90b2-09876f360ab2 00:13:06.752 19:47:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:06.752 19:47:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:06.752 19:47:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:06.752 19:47:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:06.752 19:47:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:06.752 19:47:01 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:13:06.752 19:47:01 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:06.752 19:47:01 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:06.752 19:47:01 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:06.752 19:47:01 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.752 19:47:01 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.752 19:47:01 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.752 19:47:01 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:13:06.752 19:47:01 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.752 19:47:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:13:06.752 19:47:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:06.752 19:47:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:06.752 19:47:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:06.752 19:47:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:06.752 19:47:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:06.752 19:47:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:06.752 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:06.752 19:47:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:06.752 19:47:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:06.752 19:47:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:06.752 19:47:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:13:06.752 19:47:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:13:06.752 19:47:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:13:06.752 19:47:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:13:06.752 
19:47:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:06.752 19:47:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:06.752 19:47:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:13:06.752 ************************************ 00:13:06.752 START TEST nvmf_identify 00:13:06.752 ************************************ 00:13:06.752 19:47:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:13:07.012 * Looking for test storage... 00:13:07.012 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:13:07.012 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:07.012 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:13:07.012 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:07.012 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:07.012 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:07.012 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:07.012 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:07.012 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:13:07.012 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:13:07.012 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:13:07.012 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:13:07.012 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:13:07.012 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:13:07.012 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:13:07.012 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:07.012 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:13:07.012 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:13:07.012 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:07.012 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:07.012 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:13:07.012 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:13:07.012 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:07.012 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:13:07.012 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:13:07.012 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:13:07.012 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:13:07.012 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:07.012 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:13:07.012 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:13:07.012 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:07.012 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:07.012 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:13:07.012 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:07.012 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:07.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.012 --rc genhtml_branch_coverage=1 00:13:07.012 --rc genhtml_function_coverage=1 00:13:07.012 --rc genhtml_legend=1 00:13:07.012 --rc geninfo_all_blocks=1 00:13:07.012 --rc geninfo_unexecuted_blocks=1 00:13:07.012 00:13:07.012 ' 00:13:07.012 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:07.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.012 --rc genhtml_branch_coverage=1 00:13:07.012 --rc genhtml_function_coverage=1 00:13:07.012 --rc genhtml_legend=1 00:13:07.012 --rc geninfo_all_blocks=1 00:13:07.012 --rc geninfo_unexecuted_blocks=1 00:13:07.012 00:13:07.012 ' 00:13:07.012 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:07.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.012 --rc genhtml_branch_coverage=1 00:13:07.012 --rc genhtml_function_coverage=1 00:13:07.012 --rc genhtml_legend=1 00:13:07.012 --rc geninfo_all_blocks=1 00:13:07.012 --rc geninfo_unexecuted_blocks=1 00:13:07.012 00:13:07.012 ' 00:13:07.012 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:07.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.012 --rc genhtml_branch_coverage=1 00:13:07.012 --rc genhtml_function_coverage=1 00:13:07.012 --rc genhtml_legend=1 00:13:07.012 --rc geninfo_all_blocks=1 00:13:07.012 --rc geninfo_unexecuted_blocks=1 00:13:07.012 00:13:07.012 ' 00:13:07.012 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:07.012 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:13:07.012 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:07.012 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:13:07.012 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:07.012 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:07.012 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:07.012 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:07.012 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:07.012 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:07.012 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:07.012 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:07.012 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:13:07.012 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=91838eb1-5852-43eb-90b2-09876f360ab2 00:13:07.012 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:07.012 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:07.012 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:07.012 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:07.012 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:07.012 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:13:07.012 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:07.012 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:07.012 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:07.012 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.012 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.013 
19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:07.013 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:07.013 19:47:02 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:07.013 Cannot find device "nvmf_init_br" 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:07.013 Cannot find device "nvmf_init_br2" 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:07.013 Cannot find device "nvmf_tgt_br" 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:13:07.013 Cannot find device "nvmf_tgt_br2" 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:07.013 Cannot find device "nvmf_init_br" 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:07.013 Cannot find device "nvmf_init_br2" 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:07.013 Cannot find device "nvmf_tgt_br" 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:07.013 Cannot find device "nvmf_tgt_br2" 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:07.013 Cannot find device "nvmf_br" 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:07.013 Cannot find device "nvmf_init_if" 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:07.013 Cannot find device "nvmf_init_if2" 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:07.013 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:07.013 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:07.013 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:07.014 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:07.272 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:07.272 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:07.272 
19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:07.272 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:07.272 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:07.272 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:07.272 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:07.272 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:07.272 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:07.272 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:07.272 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:07.272 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:07.272 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:07.272 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:07.272 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:07.272 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:07.272 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:07.272 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:07.272 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:07.272 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:07.272 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:07.272 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:07.272 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:07.272 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:07.272 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:07.272 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:07.272 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:07.272 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:13:07.272 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:13:07.272 00:13:07.272 --- 10.0.0.3 ping statistics --- 00:13:07.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:07.272 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:13:07.272 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:07.272 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:07.272 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.034 ms 00:13:07.272 00:13:07.272 --- 10.0.0.4 ping statistics --- 00:13:07.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:07.272 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:13:07.272 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:07.272 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:07.272 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:13:07.272 00:13:07.272 --- 10.0.0.1 ping statistics --- 00:13:07.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:07.272 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:13:07.272 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:07.272 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:07.272 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:13:07.272 00:13:07.272 --- 10.0.0.2 ping statistics --- 00:13:07.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:07.272 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:13:07.272 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:07.272 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:13:07.272 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:07.272 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:07.272 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:07.272 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:07.272 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:07.272 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:07.272 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:07.272 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:13:07.272 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:07.272 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:07.272 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=72704 00:13:07.272 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:07.272 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 72704 00:13:07.272 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 72704 ']' 00:13:07.272 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:07.272 
19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:07.272 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:07.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:07.272 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:07.272 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:07.272 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:07.272 [2024-11-26 19:47:02.444151] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:13:07.272 [2024-11-26 19:47:02.444232] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:07.531 [2024-11-26 19:47:02.589247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:07.531 [2024-11-26 19:47:02.626920] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:07.531 [2024-11-26 19:47:02.626964] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:07.531 [2024-11-26 19:47:02.626970] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:07.531 [2024-11-26 19:47:02.626975] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:07.531 [2024-11-26 19:47:02.626979] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
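The nvmf_veth_init sequence above builds the virtual test network that the ping checks then validate before nvmf_tgt is started inside the namespace. A condensed sketch of the same topology with a single initiator/target pair, run as root; interface names and addresses are copied from the log, but this is a simplified stand-in for the full common.sh helper:

# Create a network namespace for the SPDK target.
ip netns add nvmf_tgt_ns_spdk

# One veth pair for the initiator side, one for the target side.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# Address the endpoints (initiator 10.0.0.1, target 10.0.0.3, as in the log).
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

# Bring everything up and bridge the two host-side peers together.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# Allow NVMe/TCP traffic to the default port and verify reachability.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3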
00:13:07.531 [2024-11-26 19:47:02.627703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:07.531 [2024-11-26 19:47:02.627819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:07.531 [2024-11-26 19:47:02.627877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:07.531 [2024-11-26 19:47:02.627880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:07.531 [2024-11-26 19:47:02.658692] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:07.531 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:07.531 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:13:07.531 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:07.531 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.531 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:07.531 [2024-11-26 19:47:02.712047] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:07.531 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.531 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:13:07.531 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:07.531 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:07.531 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:07.531 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.531 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:07.792 Malloc0 00:13:07.792 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.792 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:07.792 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.792 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:07.792 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.792 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:13:07.792 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.792 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:07.792 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.792 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:07.792 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.792 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:07.792 [2024-11-26 19:47:02.808509] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:07.792 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.792 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:13:07.792 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.792 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:07.792 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.792 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:13:07.792 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.792 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:07.792 [ 00:13:07.792 { 00:13:07.792 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:07.792 "subtype": "Discovery", 00:13:07.792 "listen_addresses": [ 00:13:07.792 { 00:13:07.792 "trtype": "TCP", 00:13:07.792 "adrfam": "IPv4", 00:13:07.792 "traddr": "10.0.0.3", 00:13:07.792 "trsvcid": "4420" 00:13:07.792 } 00:13:07.792 ], 00:13:07.792 "allow_any_host": true, 00:13:07.792 "hosts": [] 00:13:07.792 }, 00:13:07.792 { 00:13:07.792 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:07.792 "subtype": "NVMe", 00:13:07.792 "listen_addresses": [ 00:13:07.792 { 00:13:07.792 "trtype": "TCP", 00:13:07.792 "adrfam": "IPv4", 00:13:07.792 "traddr": "10.0.0.3", 00:13:07.792 "trsvcid": "4420" 00:13:07.792 } 00:13:07.792 ], 00:13:07.792 "allow_any_host": true, 00:13:07.792 "hosts": [], 00:13:07.792 "serial_number": "SPDK00000000000001", 00:13:07.792 "model_number": "SPDK bdev Controller", 00:13:07.792 "max_namespaces": 32, 00:13:07.792 "min_cntlid": 1, 00:13:07.792 "max_cntlid": 65519, 00:13:07.792 "namespaces": [ 00:13:07.792 { 00:13:07.792 "nsid": 1, 00:13:07.792 "bdev_name": "Malloc0", 00:13:07.792 "name": "Malloc0", 00:13:07.792 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:13:07.792 "eui64": "ABCDEF0123456789", 00:13:07.792 "uuid": "606a7f72-657a-4957-b297-d5d9d024de94" 00:13:07.792 } 00:13:07.792 ] 00:13:07.792 } 00:13:07.792 ] 00:13:07.792 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.793 19:47:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:13:07.793 [2024-11-26 19:47:02.854059] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
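The rpc_cmd calls traced above are what produce the subsystem layout dumped by nvmf_get_subsystems: a TCP transport, a 64 MiB malloc bdev, cnode1 with an explicit NGUID/EUI64, and listeners for both the subsystem and discovery. Issued directly against the running nvmf_tgt with the rpc.py path used earlier in this log, the same setup would look roughly like the following sketch (values copied from the trace):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
$rpc nvmf_get_subsystems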
00:13:07.793 [2024-11-26 19:47:02.854096] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72726 ] 00:13:07.793 [2024-11-26 19:47:03.005272] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:13:07.793 [2024-11-26 19:47:03.005340] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:13:07.793 [2024-11-26 19:47:03.005345] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:13:07.793 [2024-11-26 19:47:03.005357] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:13:07.793 [2024-11-26 19:47:03.005368] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:13:07.793 [2024-11-26 19:47:03.005622] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:13:07.793 [2024-11-26 19:47:03.005657] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xad0750 0 00:13:07.793 [2024-11-26 19:47:03.012780] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:13:07.793 [2024-11-26 19:47:03.012798] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:13:07.793 [2024-11-26 19:47:03.012802] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:13:07.793 [2024-11-26 19:47:03.012806] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:13:07.793 [2024-11-26 19:47:03.012839] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:07.793 [2024-11-26 19:47:03.012846] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:07.793 [2024-11-26 19:47:03.012851] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xad0750) 00:13:07.793 [2024-11-26 19:47:03.012867] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:13:07.793 [2024-11-26 19:47:03.012893] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb34740, cid 0, qid 0 00:13:07.793 [2024-11-26 19:47:03.020780] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:07.793 [2024-11-26 19:47:03.020796] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:07.793 [2024-11-26 19:47:03.020799] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:07.793 [2024-11-26 19:47:03.020802] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb34740) on tqpair=0xad0750 00:13:07.793 [2024-11-26 19:47:03.020810] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:13:07.793 [2024-11-26 19:47:03.020816] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:13:07.793 [2024-11-26 19:47:03.020821] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:13:07.793 [2024-11-26 19:47:03.020836] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:07.793 [2024-11-26 19:47:03.020839] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:13:07.793 [2024-11-26 19:47:03.020841] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xad0750) 00:13:07.793 [2024-11-26 19:47:03.020849] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.793 [2024-11-26 19:47:03.020866] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb34740, cid 0, qid 0 00:13:07.793 [2024-11-26 19:47:03.020921] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:07.793 [2024-11-26 19:47:03.020926] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:07.793 [2024-11-26 19:47:03.020929] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:07.793 [2024-11-26 19:47:03.020931] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb34740) on tqpair=0xad0750 00:13:07.793 [2024-11-26 19:47:03.020935] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:13:07.793 [2024-11-26 19:47:03.020940] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:13:07.793 [2024-11-26 19:47:03.020945] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:07.793 [2024-11-26 19:47:03.020948] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:07.793 [2024-11-26 19:47:03.020950] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xad0750) 00:13:07.793 [2024-11-26 19:47:03.020956] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.793 [2024-11-26 19:47:03.020966] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb34740, cid 0, qid 0 00:13:07.793 [2024-11-26 19:47:03.021006] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:07.793 [2024-11-26 19:47:03.021011] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:07.793 [2024-11-26 19:47:03.021013] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:07.793 [2024-11-26 19:47:03.021016] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb34740) on tqpair=0xad0750 00:13:07.793 [2024-11-26 19:47:03.021020] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:13:07.793 [2024-11-26 19:47:03.021025] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:13:07.793 [2024-11-26 19:47:03.021030] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:07.793 [2024-11-26 19:47:03.021033] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:07.793 [2024-11-26 19:47:03.021035] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xad0750) 00:13:07.793 [2024-11-26 19:47:03.021041] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.793 [2024-11-26 19:47:03.021051] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb34740, cid 0, qid 0 00:13:07.793 [2024-11-26 19:47:03.021088] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:07.793 [2024-11-26 19:47:03.021093] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:07.793 [2024-11-26 19:47:03.021095] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:07.793 [2024-11-26 19:47:03.021098] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb34740) on tqpair=0xad0750 00:13:07.793 [2024-11-26 19:47:03.021102] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:07.793 [2024-11-26 19:47:03.021109] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:07.793 [2024-11-26 19:47:03.021111] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:07.793 [2024-11-26 19:47:03.021114] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xad0750) 00:13:07.793 [2024-11-26 19:47:03.021119] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.793 [2024-11-26 19:47:03.021129] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb34740, cid 0, qid 0 00:13:07.793 [2024-11-26 19:47:03.021169] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:07.793 [2024-11-26 19:47:03.021173] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:07.793 [2024-11-26 19:47:03.021176] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:07.793 [2024-11-26 19:47:03.021179] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb34740) on tqpair=0xad0750 00:13:07.793 [2024-11-26 19:47:03.021182] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:13:07.793 [2024-11-26 19:47:03.021185] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:13:07.793 [2024-11-26 19:47:03.021191] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:07.793 [2024-11-26 19:47:03.021295] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:13:07.793 [2024-11-26 19:47:03.021298] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:07.793 [2024-11-26 19:47:03.021305] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:07.793 [2024-11-26 19:47:03.021307] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:07.793 [2024-11-26 19:47:03.021310] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xad0750) 00:13:07.793 [2024-11-26 19:47:03.021315] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.793 [2024-11-26 19:47:03.021325] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb34740, cid 0, qid 0 00:13:07.793 [2024-11-26 19:47:03.021365] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:07.793 [2024-11-26 19:47:03.021420] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:07.793 [2024-11-26 19:47:03.021422] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:13:07.793 [2024-11-26 19:47:03.021425] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb34740) on tqpair=0xad0750 00:13:07.793 [2024-11-26 19:47:03.021429] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:07.793 [2024-11-26 19:47:03.021436] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:07.793 [2024-11-26 19:47:03.021438] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:07.793 [2024-11-26 19:47:03.021441] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xad0750) 00:13:07.793 [2024-11-26 19:47:03.021446] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.793 [2024-11-26 19:47:03.021456] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb34740, cid 0, qid 0 00:13:07.793 [2024-11-26 19:47:03.021489] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:07.793 [2024-11-26 19:47:03.021494] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:07.793 [2024-11-26 19:47:03.021496] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:07.793 [2024-11-26 19:47:03.021499] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb34740) on tqpair=0xad0750 00:13:07.793 [2024-11-26 19:47:03.021502] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:07.793 [2024-11-26 19:47:03.021506] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:13:07.793 [2024-11-26 19:47:03.021511] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:13:07.793 [2024-11-26 19:47:03.021517] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:13:07.793 [2024-11-26 19:47:03.021525] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:07.793 [2024-11-26 19:47:03.021527] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xad0750) 00:13:07.794 [2024-11-26 19:47:03.021533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.794 [2024-11-26 19:47:03.021544] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb34740, cid 0, qid 0 00:13:07.794 [2024-11-26 19:47:03.021605] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:07.794 [2024-11-26 19:47:03.021609] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:07.794 [2024-11-26 19:47:03.021612] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:07.794 [2024-11-26 19:47:03.021615] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xad0750): datao=0, datal=4096, cccid=0 00:13:07.794 [2024-11-26 19:47:03.021618] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb34740) on tqpair(0xad0750): expected_datao=0, payload_size=4096 00:13:07.794 [2024-11-26 19:47:03.021621] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:13:07.794 [2024-11-26 19:47:03.021627] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:07.794 [2024-11-26 19:47:03.021630] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:07.794 [2024-11-26 19:47:03.021637] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:07.794 [2024-11-26 19:47:03.021642] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:07.794 [2024-11-26 19:47:03.021644] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:07.794 [2024-11-26 19:47:03.021647] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb34740) on tqpair=0xad0750 00:13:07.794 [2024-11-26 19:47:03.021653] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:13:07.794 [2024-11-26 19:47:03.021656] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:13:07.794 [2024-11-26 19:47:03.021659] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:13:07.794 [2024-11-26 19:47:03.021665] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:13:07.794 [2024-11-26 19:47:03.021668] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:13:07.794 [2024-11-26 19:47:03.021671] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:13:07.794 [2024-11-26 19:47:03.021677] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:13:07.794 [2024-11-26 19:47:03.021682] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:07.794 [2024-11-26 19:47:03.021685] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:07.794 [2024-11-26 19:47:03.021688] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xad0750) 00:13:07.794 [2024-11-26 19:47:03.021693] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:07.794 [2024-11-26 19:47:03.021704] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb34740, cid 0, qid 0 00:13:07.794 [2024-11-26 19:47:03.021751] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:07.794 [2024-11-26 19:47:03.021756] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:07.794 [2024-11-26 19:47:03.021758] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:07.794 [2024-11-26 19:47:03.021761] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb34740) on tqpair=0xad0750 00:13:07.794 [2024-11-26 19:47:03.021776] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:07.794 [2024-11-26 19:47:03.021779] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:07.794 [2024-11-26 19:47:03.021782] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xad0750) 00:13:07.794 [2024-11-26 19:47:03.021786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:13:07.794 [2024-11-26 19:47:03.021791] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:07.794 [2024-11-26 19:47:03.021794] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:07.794 [2024-11-26 19:47:03.021796] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xad0750) 00:13:07.794 [2024-11-26 19:47:03.021801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:13:07.794 [2024-11-26 19:47:03.021805] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:07.794 [2024-11-26 19:47:03.021808] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:07.794 [2024-11-26 19:47:03.021810] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xad0750) 00:13:07.794 [2024-11-26 19:47:03.021814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:13:07.794 [2024-11-26 19:47:03.021819] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:07.794 [2024-11-26 19:47:03.021822] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:07.794 [2024-11-26 19:47:03.021824] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xad0750) 00:13:07.794 [2024-11-26 19:47:03.021828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:13:07.794 [2024-11-26 19:47:03.021832] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:13:07.794 [2024-11-26 19:47:03.021837] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:07.794 [2024-11-26 19:47:03.021842] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:07.794 [2024-11-26 19:47:03.021845] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xad0750) 00:13:07.794 [2024-11-26 19:47:03.021850] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.794 [2024-11-26 19:47:03.021865] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb34740, cid 0, qid 0 00:13:07.794 [2024-11-26 19:47:03.021869] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb348c0, cid 1, qid 0 00:13:07.794 [2024-11-26 19:47:03.021873] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb34a40, cid 2, qid 0 00:13:07.794 [2024-11-26 19:47:03.021876] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb34bc0, cid 3, qid 0 00:13:07.794 [2024-11-26 19:47:03.021880] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb34d40, cid 4, qid 0 00:13:07.794 [2024-11-26 19:47:03.021961] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:07.794 [2024-11-26 19:47:03.021965] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:07.794 [2024-11-26 19:47:03.021968] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:07.794 [2024-11-26 19:47:03.021971] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb34d40) on tqpair=0xad0750 00:13:07.794 [2024-11-26 19:47:03.021975] 
nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:13:07.794 [2024-11-26 19:47:03.021978] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:13:07.794 [2024-11-26 19:47:03.021986] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:07.794 [2024-11-26 19:47:03.021988] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xad0750) 00:13:07.794 [2024-11-26 19:47:03.021993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.794 [2024-11-26 19:47:03.022003] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb34d40, cid 4, qid 0 00:13:07.794 [2024-11-26 19:47:03.022048] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:07.794 [2024-11-26 19:47:03.022053] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:07.794 [2024-11-26 19:47:03.022055] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:07.794 [2024-11-26 19:47:03.022058] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xad0750): datao=0, datal=4096, cccid=4 00:13:07.794 [2024-11-26 19:47:03.022061] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb34d40) on tqpair(0xad0750): expected_datao=0, payload_size=4096 00:13:07.794 [2024-11-26 19:47:03.022064] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:07.794 [2024-11-26 19:47:03.022069] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:07.794 [2024-11-26 19:47:03.022072] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:07.794 [2024-11-26 19:47:03.022078] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:07.794 [2024-11-26 19:47:03.022082] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:07.794 [2024-11-26 19:47:03.022085] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:07.794 [2024-11-26 19:47:03.022088] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb34d40) on tqpair=0xad0750 00:13:07.794 [2024-11-26 19:47:03.022097] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:13:07.794 [2024-11-26 19:47:03.022117] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:07.794 [2024-11-26 19:47:03.022120] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xad0750) 00:13:07.794 [2024-11-26 19:47:03.022125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.794 [2024-11-26 19:47:03.022131] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:07.794 [2024-11-26 19:47:03.022134] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:07.794 [2024-11-26 19:47:03.022136] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xad0750) 00:13:07.794 [2024-11-26 19:47:03.022141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:13:07.794 [2024-11-26 19:47:03.022154] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0xb34d40, cid 4, qid 0 00:13:07.794 [2024-11-26 19:47:03.022158] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb34ec0, cid 5, qid 0 00:13:07.794 [2024-11-26 19:47:03.022246] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:07.794 [2024-11-26 19:47:03.022257] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:07.794 [2024-11-26 19:47:03.022260] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:07.794 [2024-11-26 19:47:03.022263] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xad0750): datao=0, datal=1024, cccid=4 00:13:07.794 [2024-11-26 19:47:03.022266] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb34d40) on tqpair(0xad0750): expected_datao=0, payload_size=1024 00:13:07.794 [2024-11-26 19:47:03.022269] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:07.794 [2024-11-26 19:47:03.022274] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:07.794 [2024-11-26 19:47:03.022276] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:07.794 [2024-11-26 19:47:03.022281] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:07.794 [2024-11-26 19:47:03.022286] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:07.794 [2024-11-26 19:47:03.022288] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:07.794 [2024-11-26 19:47:03.022291] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb34ec0) on tqpair=0xad0750 00:13:07.794 [2024-11-26 19:47:03.022303] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:07.795 [2024-11-26 19:47:03.022308] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:07.795 [2024-11-26 19:47:03.022310] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:07.795 [2024-11-26 19:47:03.022313] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb34d40) on tqpair=0xad0750 00:13:07.795 [2024-11-26 19:47:03.022321] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:07.795 [2024-11-26 19:47:03.022324] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xad0750) 00:13:07.795 [2024-11-26 19:47:03.022329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.795 [2024-11-26 19:47:03.022342] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb34d40, cid 4, qid 0 00:13:07.795 [2024-11-26 19:47:03.022388] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:07.795 [2024-11-26 19:47:03.022393] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:07.795 [2024-11-26 19:47:03.022395] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:07.795 [2024-11-26 19:47:03.022398] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xad0750): datao=0, datal=3072, cccid=4 00:13:07.795 [2024-11-26 19:47:03.022401] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb34d40) on tqpair(0xad0750): expected_datao=0, payload_size=3072 00:13:07.795 [2024-11-26 19:47:03.022404] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:07.795 [2024-11-26 19:47:03.022409] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:07.795 [2024-11-26 19:47:03.022411] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:07.795 [2024-11-26 19:47:03.022417] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:07.795 [2024-11-26 19:47:03.022422] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:07.795 [2024-11-26 19:47:03.022424] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:07.795 [2024-11-26 19:47:03.022427] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb34d40) on tqpair=0xad0750 00:13:07.795 [2024-11-26 19:47:03.022434] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:07.795 [2024-11-26 19:47:03.022436] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xad0750) 00:13:07.795 [2024-11-26 19:47:03.022441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.795 [2024-11-26 19:47:03.022454] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb34d40, cid 4, qid 0 00:13:07.795 [2024-11-26 19:47:03.022497] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:07.795 [2024-11-26 19:47:03.022502] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:07.795 [2024-11-26 19:47:03.022504] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:07.795 [2024-11-26 19:47:03.022507] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xad0750): datao=0, datal=8, cccid=4 00:13:07.795 [2024-11-26 19:47:03.022510] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb34d40) on tqpair(0xad0750): expected_datao=0, payload_size=8 00:13:07.795 [2024-11-26 19:47:03.022513] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:07.795 [2024-11-26 19:47:03.022518] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:07.795 [2024-11-26 19:47:03.022520] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:07.795 [2024-11-26 19:47:03.022530] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:07.795 ===================================================== 00:13:07.795 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:13:07.795 ===================================================== 00:13:07.795 Controller Capabilities/Features 00:13:07.795 ================================ 00:13:07.795 Vendor ID: 0000 00:13:07.795 Subsystem Vendor ID: 0000 00:13:07.795 Serial Number: .................... 00:13:07.795 Model Number: ........................................ 
00:13:07.795 Firmware Version: 25.01 00:13:07.795 Recommended Arb Burst: 0 00:13:07.795 IEEE OUI Identifier: 00 00 00 00:13:07.795 Multi-path I/O 00:13:07.795 May have multiple subsystem ports: No 00:13:07.795 May have multiple controllers: No 00:13:07.795 Associated with SR-IOV VF: No 00:13:07.795 Max Data Transfer Size: 131072 00:13:07.795 Max Number of Namespaces: 0 00:13:07.795 Max Number of I/O Queues: 1024 00:13:07.795 NVMe Specification Version (VS): 1.3 00:13:07.795 NVMe Specification Version (Identify): 1.3 00:13:07.795 Maximum Queue Entries: 128 00:13:07.795 Contiguous Queues Required: Yes 00:13:07.795 Arbitration Mechanisms Supported 00:13:07.795 Weighted Round Robin: Not Supported 00:13:07.795 Vendor Specific: Not Supported 00:13:07.795 Reset Timeout: 15000 ms 00:13:07.795 Doorbell Stride: 4 bytes 00:13:07.795 NVM Subsystem Reset: Not Supported 00:13:07.795 Command Sets Supported 00:13:07.795 NVM Command Set: Supported 00:13:07.795 Boot Partition: Not Supported 00:13:07.795 Memory Page Size Minimum: 4096 bytes 00:13:07.795 Memory Page Size Maximum: 4096 bytes 00:13:07.795 Persistent Memory Region: Not Supported 00:13:07.795 Optional Asynchronous Events Supported 00:13:07.795 Namespace Attribute Notices: Not Supported 00:13:07.795 Firmware Activation Notices: Not Supported 00:13:07.795 ANA Change Notices: Not Supported 00:13:07.795 PLE Aggregate Log Change Notices: Not Supported 00:13:07.795 LBA Status Info Alert Notices: Not Supported 00:13:07.795 EGE Aggregate Log Change Notices: Not Supported 00:13:07.795 Normal NVM Subsystem Shutdown event: Not Supported 00:13:07.795 Zone Descriptor Change Notices: Not Supported 00:13:07.795 Discovery Log Change Notices: Supported 00:13:07.795 Controller Attributes 00:13:07.795 128-bit Host Identifier: Not Supported 00:13:07.795 Non-Operational Permissive Mode: Not Supported 00:13:07.795 NVM Sets: Not Supported 00:13:07.795 Read Recovery Levels: Not Supported 00:13:07.795 Endurance Groups: Not Supported 00:13:07.795 Predictable Latency Mode: Not Supported 00:13:07.795 Traffic Based Keep ALive: Not Supported 00:13:07.795 Namespace Granularity: Not Supported 00:13:07.795 SQ Associations: Not Supported 00:13:07.795 UUID List: Not Supported 00:13:07.795 Multi-Domain Subsystem: Not Supported 00:13:07.795 Fixed Capacity Management: Not Supported 00:13:07.795 Variable Capacity Management: Not Supported 00:13:07.795 Delete Endurance Group: Not Supported 00:13:07.795 Delete NVM Set: Not Supported 00:13:07.795 Extended LBA Formats Supported: Not Supported 00:13:07.795 Flexible Data Placement Supported: Not Supported 00:13:07.795 00:13:07.795 Controller Memory Buffer Support 00:13:07.795 ================================ 00:13:07.795 Supported: No 00:13:07.795 00:13:07.795 Persistent Memory Region Support 00:13:07.795 ================================ 00:13:07.795 Supported: No 00:13:07.795 00:13:07.795 Admin Command Set Attributes 00:13:07.795 ============================ 00:13:07.795 Security Send/Receive: Not Supported 00:13:07.795 Format NVM: Not Supported 00:13:07.795 Firmware Activate/Download: Not Supported 00:13:07.795 Namespace Management: Not Supported 00:13:07.795 Device Self-Test: Not Supported 00:13:07.795 Directives: Not Supported 00:13:07.795 NVMe-MI: Not Supported 00:13:07.795 Virtualization Management: Not Supported 00:13:07.795 Doorbell Buffer Config: Not Supported 00:13:07.795 Get LBA Status Capability: Not Supported 00:13:07.795 Command & Feature Lockdown Capability: Not Supported 00:13:07.795 Abort Command Limit: 1 00:13:07.795 Async 
Event Request Limit: 4 00:13:07.795 Number of Firmware Slots: N/A 00:13:07.795 Firmware Slot 1 Read-Only: N/A 00:13:07.795 [2024-11-26 19:47:03.022534] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:07.795 [2024-11-26 19:47:03.022537] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:07.795 [2024-11-26 19:47:03.022540] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb34d40) on tqpair=0xad0750 00:13:07.795 Firmware Activation Without Reset: N/A 00:13:07.795 Multiple Update Detection Support: N/A 00:13:07.795 Firmware Update Granularity: No Information Provided 00:13:07.795 Per-Namespace SMART Log: No 00:13:07.795 Asymmetric Namespace Access Log Page: Not Supported 00:13:07.795 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:13:07.795 Command Effects Log Page: Not Supported 00:13:07.795 Get Log Page Extended Data: Supported 00:13:07.795 Telemetry Log Pages: Not Supported 00:13:07.795 Persistent Event Log Pages: Not Supported 00:13:07.795 Supported Log Pages Log Page: May Support 00:13:07.795 Commands Supported & Effects Log Page: Not Supported 00:13:07.795 Feature Identifiers & Effects Log Page:May Support 00:13:07.795 NVMe-MI Commands & Effects Log Page: May Support 00:13:07.795 Data Area 4 for Telemetry Log: Not Supported 00:13:07.795 Error Log Page Entries Supported: 128 00:13:07.795 Keep Alive: Not Supported 00:13:07.795 00:13:07.795 NVM Command Set Attributes 00:13:07.795 ========================== 00:13:07.795 Submission Queue Entry Size 00:13:07.795 Max: 1 00:13:07.795 Min: 1 00:13:07.795 Completion Queue Entry Size 00:13:07.795 Max: 1 00:13:07.795 Min: 1 00:13:07.795 Number of Namespaces: 0 00:13:07.795 Compare Command: Not Supported 00:13:07.795 Write Uncorrectable Command: Not Supported 00:13:07.795 Dataset Management Command: Not Supported 00:13:07.795 Write Zeroes Command: Not Supported 00:13:07.795 Set Features Save Field: Not Supported 00:13:07.795 Reservations: Not Supported 00:13:07.795 Timestamp: Not Supported 00:13:07.795 Copy: Not Supported 00:13:07.795 Volatile Write Cache: Not Present 00:13:07.795 Atomic Write Unit (Normal): 1 00:13:07.795 Atomic Write Unit (PFail): 1 00:13:07.795 Atomic Compare & Write Unit: 1 00:13:07.795 Fused Compare & Write: Supported 00:13:07.795 Scatter-Gather List 00:13:07.795 SGL Command Set: Supported 00:13:07.796 SGL Keyed: Supported 00:13:07.796 SGL Bit Bucket Descriptor: Not Supported 00:13:07.796 SGL Metadata Pointer: Not Supported 00:13:07.796 Oversized SGL: Not Supported 00:13:07.796 SGL Metadata Address: Not Supported 00:13:07.796 SGL Offset: Supported 00:13:07.796 Transport SGL Data Block: Not Supported 00:13:07.796 Replay Protected Memory Block: Not Supported 00:13:07.796 00:13:07.796 Firmware Slot Information 00:13:07.796 ========================= 00:13:07.796 Active slot: 0 00:13:07.796 00:13:07.796 00:13:07.796 Error Log 00:13:07.796 ========= 00:13:07.796 00:13:07.796 Active Namespaces 00:13:07.796 ================= 00:13:07.796 Discovery Log Page 00:13:07.796 ================== 00:13:07.796 Generation Counter: 2 00:13:07.796 Number of Records: 2 00:13:07.796 Record Format: 0 00:13:07.796 00:13:07.796 Discovery Log Entry 0 00:13:07.796 ---------------------- 00:13:07.796 Transport Type: 3 (TCP) 00:13:07.796 Address Family: 1 (IPv4) 00:13:07.796 Subsystem Type: 3 (Current Discovery Subsystem) 00:13:07.796 Entry Flags: 00:13:07.796 Duplicate Returned Information: 1 00:13:07.796 Explicit Persistent Connection Support for Discovery: 1 00:13:07.796 Transport
Requirements: 00:13:07.796 Secure Channel: Not Required 00:13:07.796 Port ID: 0 (0x0000) 00:13:07.796 Controller ID: 65535 (0xffff) 00:13:07.796 Admin Max SQ Size: 128 00:13:07.796 Transport Service Identifier: 4420 00:13:07.796 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:13:07.796 Transport Address: 10.0.0.3 00:13:07.796 Discovery Log Entry 1 00:13:07.796 ---------------------- 00:13:07.796 Transport Type: 3 (TCP) 00:13:07.796 Address Family: 1 (IPv4) 00:13:07.796 Subsystem Type: 2 (NVM Subsystem) 00:13:07.796 Entry Flags: 00:13:07.796 Duplicate Returned Information: 0 00:13:07.796 Explicit Persistent Connection Support for Discovery: 0 00:13:07.796 Transport Requirements: 00:13:07.796 Secure Channel: Not Required 00:13:07.796 Port ID: 0 (0x0000) 00:13:07.796 Controller ID: 65535 (0xffff) 00:13:07.796 Admin Max SQ Size: 128 00:13:07.796 Transport Service Identifier: 4420 00:13:07.796 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:13:07.796 Transport Address: 10.0.0.3 [2024-11-26 19:47:03.022611] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:13:07.796 [2024-11-26 19:47:03.022619] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb34740) on tqpair=0xad0750 00:13:07.796 [2024-11-26 19:47:03.022624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.796 [2024-11-26 19:47:03.022628] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb348c0) on tqpair=0xad0750 00:13:07.796 [2024-11-26 19:47:03.022631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.796 [2024-11-26 19:47:03.022635] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb34a40) on tqpair=0xad0750 00:13:07.796 [2024-11-26 19:47:03.022638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.796 [2024-11-26 19:47:03.022642] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb34bc0) on tqpair=0xad0750 00:13:07.796 [2024-11-26 19:47:03.022645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.796 [2024-11-26 19:47:03.022653] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:07.796 [2024-11-26 19:47:03.022655] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:07.796 [2024-11-26 19:47:03.022658] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xad0750) 00:13:07.796 [2024-11-26 19:47:03.022663] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.796 [2024-11-26 19:47:03.022676] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb34bc0, cid 3, qid 0 00:13:07.796 [2024-11-26 19:47:03.022711] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:07.796 [2024-11-26 19:47:03.022716] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:07.796 [2024-11-26 19:47:03.022718] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:07.796 [2024-11-26 19:47:03.022721] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb34bc0) on tqpair=0xad0750 00:13:07.796 [2024-11-26 19:47:03.022726] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:07.796 [2024-11-26 19:47:03.022729] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:07.796 [2024-11-26 19:47:03.022731] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xad0750) 00:13:07.796 [2024-11-26 19:47:03.022736] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.796 [2024-11-26 19:47:03.022749] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb34bc0, cid 3, qid 0 00:13:07.796 [2024-11-26 19:47:03.022812] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:07.796 [2024-11-26 19:47:03.022818] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:07.796 [2024-11-26 19:47:03.022820] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:07.796 [2024-11-26 19:47:03.022823] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb34bc0) on tqpair=0xad0750 00:13:07.796 [2024-11-26 19:47:03.022826] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:13:07.796 [2024-11-26 19:47:03.022829] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:13:07.796 [2024-11-26 19:47:03.022836] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:07.796 [2024-11-26 19:47:03.022839] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:07.796 [2024-11-26 19:47:03.022841] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xad0750) 00:13:07.796 [2024-11-26 19:47:03.022847] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.796 [2024-11-26 19:47:03.022857] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb34bc0, cid 3, qid 0 00:13:07.796 [2024-11-26 19:47:03.022891] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:07.796 [2024-11-26 19:47:03.022896] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:07.796 [2024-11-26 19:47:03.022898] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:07.796 [2024-11-26 19:47:03.022901] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb34bc0) on tqpair=0xad0750 00:13:07.796 [2024-11-26 19:47:03.022908] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:07.796 [2024-11-26 19:47:03.022911] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:07.796 [2024-11-26 19:47:03.022913] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xad0750) 00:13:07.796 [2024-11-26 19:47:03.022919] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.796 [2024-11-26 19:47:03.022928] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb34bc0, cid 3, qid 0 00:13:07.796 [2024-11-26 19:47:03.022982] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:07.796 [2024-11-26 19:47:03.022988] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:07.796 [2024-11-26 19:47:03.022990] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:07.796 [2024-11-26 19:47:03.022993] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb34bc0) on tqpair=0xad0750 00:13:07.796 [2024-11-26 19:47:03.023001] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:07.796 [2024-11-26 19:47:03.023003] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:07.796 [2024-11-26 19:47:03.023006] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xad0750) 00:13:07.796 [2024-11-26 19:47:03.023011] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.796 [2024-11-26 19:47:03.023022] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb34bc0, cid 3, qid 0 00:13:07.796 [2024-11-26 19:47:03.023055] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:07.796 [2024-11-26 19:47:03.023060] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:07.796 [2024-11-26 19:47:03.023062] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:07.796 [2024-11-26 19:47:03.023065] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb34bc0) on tqpair=0xad0750 00:13:07.796 [2024-11-26 19:47:03.023072] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:07.796 [2024-11-26 19:47:03.023075] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:07.796 [2024-11-26 19:47:03.023077] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xad0750) 00:13:07.796 [2024-11-26 19:47:03.023083] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.796 [2024-11-26 19:47:03.023092] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb34bc0, cid 3, qid 0 00:13:07.796 [2024-11-26 19:47:03.023126] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:07.796 [2024-11-26 19:47:03.023131] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:07.796 [2024-11-26 19:47:03.023133] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:07.796 [2024-11-26 19:47:03.023136] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb34bc0) on tqpair=0xad0750 00:13:07.796 [2024-11-26 19:47:03.023143] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:07.796 [2024-11-26 19:47:03.023146] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:07.796 [2024-11-26 19:47:03.023148] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xad0750) 00:13:07.796 [2024-11-26 19:47:03.023153] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.796 [2024-11-26 19:47:03.023163] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb34bc0, cid 3, qid 0 00:13:07.796 [2024-11-26 19:47:03.023197] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:07.796 [2024-11-26 19:47:03.023202] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:07.796 [2024-11-26 19:47:03.023204] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:07.796 [2024-11-26 19:47:03.023207] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb34bc0) on tqpair=0xad0750 00:13:07.796 [2024-11-26 19:47:03.023214] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:07.796 
[2024-11-26 19:47:03.023217] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:07.797 [2024-11-26 19:47:03.023219] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xad0750) 00:13:07.797 [2024-11-26 19:47:03.023225] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.797 [2024-11-26 19:47:03.023234] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb34bc0, cid 3, qid 0 00:13:07.797 [2024-11-26 19:47:03.023271] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:07.797 [2024-11-26 19:47:03.023276] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:07.797 [2024-11-26 19:47:03.023278] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:07.797 [2024-11-26 19:47:03.023281] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb34bc0) on tqpair=0xad0750 00:13:07.797 [2024-11-26 19:47:03.023288] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:07.797 [2024-11-26 19:47:03.023291] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:07.797 [2024-11-26 19:47:03.023294] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xad0750) 00:13:07.797 [2024-11-26 19:47:03.023299] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.797 [2024-11-26 19:47:03.023309] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb34bc0, cid 3, qid 0 00:13:07.797 [2024-11-26 19:47:03.023340] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:07.797 [2024-11-26 19:47:03.023345] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:07.797 [2024-11-26 19:47:03.023348] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:07.797 [2024-11-26 19:47:03.023350] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb34bc0) on tqpair=0xad0750 00:13:07.797 [2024-11-26 19:47:03.023358] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:07.797 [2024-11-26 19:47:03.023360] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:07.797 [2024-11-26 19:47:03.023363] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xad0750) 00:13:07.797 [2024-11-26 19:47:03.023368] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.797 [2024-11-26 19:47:03.023378] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb34bc0, cid 3, qid 0 00:13:07.797 [2024-11-26 19:47:03.023414] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:07.797 [2024-11-26 19:47:03.023419] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:07.797 [2024-11-26 19:47:03.023421] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:07.797 [2024-11-26 19:47:03.023424] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb34bc0) on tqpair=0xad0750 00:13:07.797 [2024-11-26 19:47:03.023431] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:07.797 [2024-11-26 19:47:03.023434] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:07.797 [2024-11-26 19:47:03.023436] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0xad0750) 00:13:07.797 [2024-11-26 19:47:03.023441] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.797 [2024-11-26 19:47:03.023451] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb34bc0, cid 3, qid 0 00:13:07.797 [2024-11-26 19:47:03.023489] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:07.797 [2024-11-26 19:47:03.023499] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:07.797 [2024-11-26 19:47:03.023501] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:07.797 [2024-11-26 19:47:03.023504] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb34bc0) on tqpair=0xad0750 00:13:07.797 [2024-11-26 19:47:03.023511] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:07.797 [2024-11-26 19:47:03.023514] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:07.797 [2024-11-26 19:47:03.023517] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xad0750) 00:13:07.797 [2024-11-26 19:47:03.023522] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.797 [2024-11-26 19:47:03.023532] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb34bc0, cid 3, qid 0 00:13:07.797 [2024-11-26 19:47:03.023568] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:07.797 [2024-11-26 19:47:03.023573] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:07.797 [2024-11-26 19:47:03.023576] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:07.797 [2024-11-26 19:47:03.023578] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb34bc0) on tqpair=0xad0750 00:13:07.797 [2024-11-26 19:47:03.023586] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:07.797 [2024-11-26 19:47:03.023588] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:07.797 [2024-11-26 19:47:03.023591] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xad0750) 00:13:07.797 [2024-11-26 19:47:03.023596] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.797 [2024-11-26 19:47:03.023606] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb34bc0, cid 3, qid 0 00:13:07.797 [2024-11-26 19:47:03.023642] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:07.797 [2024-11-26 19:47:03.023651] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:07.797 [2024-11-26 19:47:03.023654] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:07.797 [2024-11-26 19:47:03.023656] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb34bc0) on tqpair=0xad0750 00:13:07.797 [2024-11-26 19:47:03.023664] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:07.797 [2024-11-26 19:47:03.023667] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:07.797 [2024-11-26 19:47:03.023669] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xad0750) 00:13:07.797 [2024-11-26 19:47:03.023674] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.797 [2024-11-26 
19:47:03.023684] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb34bc0, cid 3, qid 0 00:13:07.797 [2024-11-26 19:47:03.023718] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:07.797 [2024-11-26 19:47:03.023723] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:07.797 [2024-11-26 19:47:03.023725] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:07.797 [2024-11-26 19:47:03.023728] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb34bc0) on tqpair=0xad0750 00:13:07.797 [2024-11-26 19:47:03.023735] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:07.797 [2024-11-26 19:47:03.023738] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:07.797 [2024-11-26 19:47:03.023740] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xad0750) 00:13:07.797 [2024-11-26 19:47:03.023746] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.797 [2024-11-26 19:47:03.023756] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb34bc0, cid 3, qid 0 00:13:07.797 [2024-11-26 19:47:03.023812] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:07.797 [2024-11-26 19:47:03.023821] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:07.797 [2024-11-26 19:47:03.023823] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:07.797 [2024-11-26 19:47:03.023826] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb34bc0) on tqpair=0xad0750 00:13:07.797 [2024-11-26 19:47:03.023834] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:07.797 [2024-11-26 19:47:03.023837] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:07.797 [2024-11-26 19:47:03.023839] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xad0750) 00:13:07.797 [2024-11-26 19:47:03.023844] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.797 [2024-11-26 19:47:03.023855] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb34bc0, cid 3, qid 0 00:13:07.797 [2024-11-26 19:47:03.023894] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:07.797 [2024-11-26 19:47:03.023899] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:07.797 [2024-11-26 19:47:03.023901] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:07.797 [2024-11-26 19:47:03.023904] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb34bc0) on tqpair=0xad0750 00:13:07.797 [2024-11-26 19:47:03.023911] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:07.797 [2024-11-26 19:47:03.023914] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:07.797 [2024-11-26 19:47:03.023916] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xad0750) 00:13:07.797 [2024-11-26 19:47:03.023922] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.797 [2024-11-26 19:47:03.023931] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb34bc0, cid 3, qid 0 00:13:07.797 [2024-11-26 19:47:03.023973] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:13:07.797 [2024-11-26 19:47:03.023981] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:07.797 [2024-11-26 19:47:03.023984] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:07.797 [2024-11-26 19:47:03.023987] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb34bc0) on tqpair=0xad0750 00:13:07.797 [2024-11-26 19:47:03.023994] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:07.797 [2024-11-26 19:47:03.023997] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:07.797 [2024-11-26 19:47:03.024000] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xad0750) 00:13:07.798 [2024-11-26 19:47:03.024005] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.798 [2024-11-26 19:47:03.024015] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb34bc0, cid 3, qid 0 00:13:07.798 [2024-11-26 19:47:03.024061] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:07.798 [2024-11-26 19:47:03.024070] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:07.798 [2024-11-26 19:47:03.024072] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:07.798 [2024-11-26 19:47:03.024075] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb34bc0) on tqpair=0xad0750 00:13:07.798 [2024-11-26 19:47:03.024083] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:07.798 [2024-11-26 19:47:03.024085] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:07.798 [2024-11-26 19:47:03.024088] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xad0750) 00:13:07.798 [2024-11-26 19:47:03.024093] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.798 [2024-11-26 19:47:03.024104] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb34bc0, cid 3, qid 0 00:13:07.798 [2024-11-26 19:47:03.024137] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:07.798 [2024-11-26 19:47:03.024142] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:07.798 [2024-11-26 19:47:03.024144] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:07.798 [2024-11-26 19:47:03.024147] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb34bc0) on tqpair=0xad0750 00:13:07.798 [2024-11-26 19:47:03.024155] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:07.798 [2024-11-26 19:47:03.024157] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:07.798 [2024-11-26 19:47:03.024160] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xad0750) 00:13:07.798 [2024-11-26 19:47:03.024165] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.798 [2024-11-26 19:47:03.024175] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb34bc0, cid 3, qid 0 00:13:07.798 [2024-11-26 19:47:03.024213] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:07.798 [2024-11-26 19:47:03.024218] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:07.798 [2024-11-26 19:47:03.024220] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:13:07.798 [2024-11-26 19:47:03.024223] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb34bc0) on tqpair=0xad0750 00:13:07.798 [2024-11-26 19:47:03.024231] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:07.798 [2024-11-26 19:47:03.024233] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:07.798 [2024-11-26 19:47:03.024236] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xad0750) 00:13:07.798 [2024-11-26 19:47:03.024241] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.798 [2024-11-26 19:47:03.024250] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb34bc0, cid 3, qid 0 00:13:07.798 [2024-11-26 19:47:03.024284] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:07.798 [2024-11-26 19:47:03.024289] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:07.798 [2024-11-26 19:47:03.024291] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:07.798 [2024-11-26 19:47:03.024294] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb34bc0) on tqpair=0xad0750 00:13:07.798 [2024-11-26 19:47:03.024301] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:07.798 [2024-11-26 19:47:03.024304] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:07.798 [2024-11-26 19:47:03.024306] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xad0750) 00:13:07.798 [2024-11-26 19:47:03.024312] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.798 [2024-11-26 19:47:03.024321] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb34bc0, cid 3, qid 0 00:13:07.798 [2024-11-26 19:47:03.024357] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:07.798 [2024-11-26 19:47:03.024362] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:07.798 [2024-11-26 19:47:03.024364] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:07.798 [2024-11-26 19:47:03.024367] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb34bc0) on tqpair=0xad0750 00:13:07.798 [2024-11-26 19:47:03.024374] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:07.798 [2024-11-26 19:47:03.024377] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:07.798 [2024-11-26 19:47:03.024379] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xad0750) 00:13:07.798 [2024-11-26 19:47:03.024384] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.798 [2024-11-26 19:47:03.024394] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb34bc0, cid 3, qid 0 00:13:07.798 [2024-11-26 19:47:03.024430] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:07.798 [2024-11-26 19:47:03.024435] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:07.798 [2024-11-26 19:47:03.024437] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:07.798 [2024-11-26 19:47:03.024440] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb34bc0) on tqpair=0xad0750 00:13:07.798 [2024-11-26 19:47:03.024447] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:07.798 [2024-11-26 19:47:03.024450] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:07.798 [2024-11-26 19:47:03.024452] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xad0750) 00:13:07.798 [2024-11-26 19:47:03.024457] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.798 [2024-11-26 19:47:03.024467] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb34bc0, cid 3, qid 0 00:13:07.798 [2024-11-26 19:47:03.024501] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:07.798 [2024-11-26 19:47:03.024510] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:07.798 [2024-11-26 19:47:03.024512] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:07.798 [2024-11-26 19:47:03.024515] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb34bc0) on tqpair=0xad0750 00:13:07.798 [2024-11-26 19:47:03.024522] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:07.798 [2024-11-26 19:47:03.024525] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:07.798 [2024-11-26 19:47:03.024528] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xad0750) 00:13:07.798 [2024-11-26 19:47:03.024533] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.798 [2024-11-26 19:47:03.024543] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb34bc0, cid 3, qid 0 00:13:07.798 [2024-11-26 19:47:03.024579] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:07.798 [2024-11-26 19:47:03.024585] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:07.798 [2024-11-26 19:47:03.024587] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:07.798 [2024-11-26 19:47:03.024590] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb34bc0) on tqpair=0xad0750 00:13:07.798 [2024-11-26 19:47:03.024597] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:07.798 [2024-11-26 19:47:03.024600] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:07.798 [2024-11-26 19:47:03.024602] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xad0750) 00:13:07.798 [2024-11-26 19:47:03.024608] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.798 [2024-11-26 19:47:03.024618] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb34bc0, cid 3, qid 0 00:13:07.798 [2024-11-26 19:47:03.024651] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:07.798 [2024-11-26 19:47:03.024660] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:07.798 [2024-11-26 19:47:03.024662] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:07.798 [2024-11-26 19:47:03.024665] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb34bc0) on tqpair=0xad0750 00:13:07.798 [2024-11-26 19:47:03.024673] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:07.798 [2024-11-26 19:47:03.024675] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:07.798 [2024-11-26 19:47:03.024678] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xad0750) 00:13:07.798 [2024-11-26 19:47:03.024683] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.798 [2024-11-26 19:47:03.024694] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb34bc0, cid 3, qid 0 00:13:07.798 [2024-11-26 19:47:03.024730] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:07.798 [2024-11-26 19:47:03.024738] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:07.798 [2024-11-26 19:47:03.024741] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:07.798 [2024-11-26 19:47:03.024744] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb34bc0) on tqpair=0xad0750 00:13:07.798 [2024-11-26 19:47:03.024751] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:07.798 [2024-11-26 19:47:03.024754] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:07.798 [2024-11-26 19:47:03.024756] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xad0750) 00:13:07.798 [2024-11-26 19:47:03.024762] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:07.798 [2024-11-26 19:47:03.028806] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb34bc0, cid 3, qid 0 00:13:07.798 [2024-11-26 19:47:03.028850] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:07.798 [2024-11-26 19:47:03.028855] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:07.798 [2024-11-26 19:47:03.028858] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:07.798 [2024-11-26 19:47:03.028861] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb34bc0) on tqpair=0xad0750 00:13:07.798 [2024-11-26 19:47:03.028867] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 6 milliseconds 00:13:08.060 00:13:08.060 19:47:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:13:08.060 [2024-11-26 19:47:03.059814] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:13:08.060 [2024-11-26 19:47:03.059851] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72728 ] 00:13:08.060 [2024-11-26 19:47:03.211215] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:13:08.060 [2024-11-26 19:47:03.211275] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:13:08.060 [2024-11-26 19:47:03.211279] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:13:08.060 [2024-11-26 19:47:03.211291] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:13:08.060 [2024-11-26 19:47:03.211301] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:13:08.060 [2024-11-26 19:47:03.211537] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:13:08.060 [2024-11-26 19:47:03.211578] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x12e8750 0 00:13:08.060 [2024-11-26 19:47:03.217782] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:13:08.060 [2024-11-26 19:47:03.217800] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:13:08.060 [2024-11-26 19:47:03.217804] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:13:08.060 [2024-11-26 19:47:03.217806] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:13:08.060 [2024-11-26 19:47:03.217832] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:08.060 [2024-11-26 19:47:03.217836] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:08.060 [2024-11-26 19:47:03.217839] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12e8750) 00:13:08.060 [2024-11-26 19:47:03.217850] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:13:08.060 [2024-11-26 19:47:03.217871] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x134c740, cid 0, qid 0 00:13:08.060 [2024-11-26 19:47:03.225779] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:08.060 [2024-11-26 19:47:03.225795] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:08.060 [2024-11-26 19:47:03.225799] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:08.060 [2024-11-26 19:47:03.225803] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x134c740) on tqpair=0x12e8750 00:13:08.060 [2024-11-26 19:47:03.225813] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:13:08.060 [2024-11-26 19:47:03.225820] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:13:08.060 [2024-11-26 19:47:03.225825] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:13:08.060 [2024-11-26 19:47:03.225838] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:08.060 [2024-11-26 19:47:03.225841] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:08.060 [2024-11-26 19:47:03.225844] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12e8750) 00:13:08.060 [2024-11-26 19:47:03.225852] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:08.060 [2024-11-26 19:47:03.225869] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x134c740, cid 0, qid 0 00:13:08.060 [2024-11-26 19:47:03.225918] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:08.060 [2024-11-26 19:47:03.225923] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:08.060 [2024-11-26 19:47:03.225926] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:08.060 [2024-11-26 19:47:03.225929] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x134c740) on tqpair=0x12e8750 00:13:08.060 [2024-11-26 19:47:03.225934] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:13:08.060 [2024-11-26 19:47:03.225940] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:13:08.060 [2024-11-26 19:47:03.225945] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:08.060 [2024-11-26 19:47:03.225948] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:08.060 [2024-11-26 19:47:03.225951] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12e8750) 00:13:08.060 [2024-11-26 19:47:03.225957] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:08.060 [2024-11-26 19:47:03.225968] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x134c740, cid 0, qid 0 00:13:08.060 [2024-11-26 19:47:03.226015] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:08.060 [2024-11-26 19:47:03.226021] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:08.060 [2024-11-26 19:47:03.226023] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:08.060 [2024-11-26 19:47:03.226027] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x134c740) on tqpair=0x12e8750 00:13:08.060 [2024-11-26 19:47:03.226031] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:13:08.060 [2024-11-26 19:47:03.226038] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:13:08.060 [2024-11-26 19:47:03.226043] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:08.060 [2024-11-26 19:47:03.226046] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:08.060 [2024-11-26 19:47:03.226049] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12e8750) 00:13:08.060 [2024-11-26 19:47:03.226056] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:08.060 [2024-11-26 19:47:03.226067] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x134c740, cid 0, qid 0 00:13:08.060 [2024-11-26 19:47:03.226102] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:08.060 [2024-11-26 19:47:03.226106] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:08.060 
[2024-11-26 19:47:03.226109] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:08.060 [2024-11-26 19:47:03.226111] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x134c740) on tqpair=0x12e8750 00:13:08.060 [2024-11-26 19:47:03.226115] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:08.060 [2024-11-26 19:47:03.226122] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:08.060 [2024-11-26 19:47:03.226125] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:08.060 [2024-11-26 19:47:03.226127] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12e8750) 00:13:08.060 [2024-11-26 19:47:03.226132] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:08.060 [2024-11-26 19:47:03.226142] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x134c740, cid 0, qid 0 00:13:08.060 [2024-11-26 19:47:03.226181] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:08.060 [2024-11-26 19:47:03.226187] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:08.060 [2024-11-26 19:47:03.226189] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:08.060 [2024-11-26 19:47:03.226192] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x134c740) on tqpair=0x12e8750 00:13:08.060 [2024-11-26 19:47:03.226196] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:13:08.060 [2024-11-26 19:47:03.226199] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:13:08.060 [2024-11-26 19:47:03.226205] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:08.060 [2024-11-26 19:47:03.226309] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:13:08.061 [2024-11-26 19:47:03.226318] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:08.061 [2024-11-26 19:47:03.226324] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:08.061 [2024-11-26 19:47:03.226327] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:08.061 [2024-11-26 19:47:03.226330] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12e8750) 00:13:08.061 [2024-11-26 19:47:03.226335] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:08.061 [2024-11-26 19:47:03.226346] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x134c740, cid 0, qid 0 00:13:08.061 [2024-11-26 19:47:03.226382] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:08.061 [2024-11-26 19:47:03.226387] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:08.061 [2024-11-26 19:47:03.226390] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:08.061 [2024-11-26 19:47:03.226392] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x134c740) on tqpair=0x12e8750 
00:13:08.061 [2024-11-26 19:47:03.226396] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:08.061 [2024-11-26 19:47:03.226403] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:08.061 [2024-11-26 19:47:03.226406] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:08.061 [2024-11-26 19:47:03.226409] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12e8750) 00:13:08.061 [2024-11-26 19:47:03.226414] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:08.061 [2024-11-26 19:47:03.226424] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x134c740, cid 0, qid 0 00:13:08.061 [2024-11-26 19:47:03.226469] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:08.061 [2024-11-26 19:47:03.226474] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:08.061 [2024-11-26 19:47:03.226477] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:08.061 [2024-11-26 19:47:03.226479] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x134c740) on tqpair=0x12e8750 00:13:08.061 [2024-11-26 19:47:03.226483] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:08.061 [2024-11-26 19:47:03.226486] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:13:08.061 [2024-11-26 19:47:03.226491] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:13:08.061 [2024-11-26 19:47:03.226498] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:13:08.061 [2024-11-26 19:47:03.226505] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:08.061 [2024-11-26 19:47:03.226508] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12e8750) 00:13:08.061 [2024-11-26 19:47:03.226514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:08.061 [2024-11-26 19:47:03.226524] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x134c740, cid 0, qid 0 00:13:08.061 [2024-11-26 19:47:03.226616] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:08.061 [2024-11-26 19:47:03.226630] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:08.061 [2024-11-26 19:47:03.226633] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:08.061 [2024-11-26 19:47:03.226636] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12e8750): datao=0, datal=4096, cccid=0 00:13:08.061 [2024-11-26 19:47:03.226639] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x134c740) on tqpair(0x12e8750): expected_datao=0, payload_size=4096 00:13:08.061 [2024-11-26 19:47:03.226642] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:08.061 [2024-11-26 19:47:03.226648] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:08.061 [2024-11-26 19:47:03.226651] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:08.061 [2024-11-26 19:47:03.226658] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:08.061 [2024-11-26 19:47:03.226663] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:08.061 [2024-11-26 19:47:03.226665] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:08.061 [2024-11-26 19:47:03.226668] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x134c740) on tqpair=0x12e8750 00:13:08.061 [2024-11-26 19:47:03.226674] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:13:08.061 [2024-11-26 19:47:03.226677] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:13:08.061 [2024-11-26 19:47:03.226680] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:13:08.061 [2024-11-26 19:47:03.226686] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:13:08.061 [2024-11-26 19:47:03.226689] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:13:08.061 [2024-11-26 19:47:03.226692] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:13:08.061 [2024-11-26 19:47:03.226698] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:13:08.061 [2024-11-26 19:47:03.226703] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:08.061 [2024-11-26 19:47:03.226706] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:08.061 [2024-11-26 19:47:03.226708] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12e8750) 00:13:08.061 [2024-11-26 19:47:03.226713] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:08.061 [2024-11-26 19:47:03.226725] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x134c740, cid 0, qid 0 00:13:08.061 [2024-11-26 19:47:03.226764] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:08.061 [2024-11-26 19:47:03.226785] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:08.061 [2024-11-26 19:47:03.226788] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:08.061 [2024-11-26 19:47:03.226790] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x134c740) on tqpair=0x12e8750 00:13:08.061 [2024-11-26 19:47:03.226796] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:08.061 [2024-11-26 19:47:03.226799] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:08.061 [2024-11-26 19:47:03.226801] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12e8750) 00:13:08.061 [2024-11-26 19:47:03.226806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:13:08.061 [2024-11-26 19:47:03.226811] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:08.061 [2024-11-26 19:47:03.226814] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:08.061 [2024-11-26 
19:47:03.226817] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x12e8750) 00:13:08.061 [2024-11-26 19:47:03.226821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:13:08.061 [2024-11-26 19:47:03.226826] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:08.061 [2024-11-26 19:47:03.226829] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:08.061 [2024-11-26 19:47:03.226831] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x12e8750) 00:13:08.061 [2024-11-26 19:47:03.226836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:13:08.061 [2024-11-26 19:47:03.226841] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:08.061 [2024-11-26 19:47:03.226843] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:08.061 [2024-11-26 19:47:03.226846] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12e8750) 00:13:08.061 [2024-11-26 19:47:03.226850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:13:08.061 [2024-11-26 19:47:03.226853] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:13:08.061 [2024-11-26 19:47:03.226859] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:08.061 [2024-11-26 19:47:03.226864] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:08.061 [2024-11-26 19:47:03.226866] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12e8750) 00:13:08.061 [2024-11-26 19:47:03.226872] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:08.061 [2024-11-26 19:47:03.226887] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x134c740, cid 0, qid 0 00:13:08.061 [2024-11-26 19:47:03.226891] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x134c8c0, cid 1, qid 0 00:13:08.062 [2024-11-26 19:47:03.226895] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x134ca40, cid 2, qid 0 00:13:08.062 [2024-11-26 19:47:03.226898] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x134cbc0, cid 3, qid 0 00:13:08.062 [2024-11-26 19:47:03.226902] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x134cd40, cid 4, qid 0 00:13:08.062 [2024-11-26 19:47:03.227003] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:08.062 [2024-11-26 19:47:03.227008] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:08.062 [2024-11-26 19:47:03.227011] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:08.062 [2024-11-26 19:47:03.227013] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x134cd40) on tqpair=0x12e8750 00:13:08.062 [2024-11-26 19:47:03.227017] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:13:08.062 [2024-11-26 19:47:03.227021] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:08.062 [2024-11-26 19:47:03.227027] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:13:08.062 [2024-11-26 19:47:03.227031] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:13:08.062 [2024-11-26 19:47:03.227036] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:08.062 [2024-11-26 19:47:03.227039] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:08.062 [2024-11-26 19:47:03.227041] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12e8750) 00:13:08.062 [2024-11-26 19:47:03.227046] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:08.062 [2024-11-26 19:47:03.227057] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x134cd40, cid 4, qid 0 00:13:08.062 [2024-11-26 19:47:03.227096] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:08.062 [2024-11-26 19:47:03.227105] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:08.062 [2024-11-26 19:47:03.227108] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:08.062 [2024-11-26 19:47:03.227110] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x134cd40) on tqpair=0x12e8750 00:13:08.062 [2024-11-26 19:47:03.227170] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:13:08.062 [2024-11-26 19:47:03.227183] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:13:08.062 [2024-11-26 19:47:03.227189] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:08.062 [2024-11-26 19:47:03.227192] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12e8750) 00:13:08.062 [2024-11-26 19:47:03.227197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:08.062 [2024-11-26 19:47:03.227208] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x134cd40, cid 4, qid 0 00:13:08.062 [2024-11-26 19:47:03.227273] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:08.062 [2024-11-26 19:47:03.227279] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:08.062 [2024-11-26 19:47:03.227281] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:08.062 [2024-11-26 19:47:03.227284] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12e8750): datao=0, datal=4096, cccid=4 00:13:08.062 [2024-11-26 19:47:03.227287] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x134cd40) on tqpair(0x12e8750): expected_datao=0, payload_size=4096 00:13:08.062 [2024-11-26 19:47:03.227290] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:08.062 [2024-11-26 19:47:03.227296] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:08.062 [2024-11-26 19:47:03.227298] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:08.062 [2024-11-26 
19:47:03.227304] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:08.062 [2024-11-26 19:47:03.227309] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:08.062 [2024-11-26 19:47:03.227311] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:08.062 [2024-11-26 19:47:03.227314] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x134cd40) on tqpair=0x12e8750 00:13:08.062 [2024-11-26 19:47:03.227321] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:13:08.062 [2024-11-26 19:47:03.227329] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:13:08.062 [2024-11-26 19:47:03.227336] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:13:08.062 [2024-11-26 19:47:03.227341] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:08.062 [2024-11-26 19:47:03.227344] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12e8750) 00:13:08.062 [2024-11-26 19:47:03.227349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:08.062 [2024-11-26 19:47:03.227359] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x134cd40, cid 4, qid 0 00:13:08.062 [2024-11-26 19:47:03.227451] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:08.062 [2024-11-26 19:47:03.227462] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:08.062 [2024-11-26 19:47:03.227464] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:08.062 [2024-11-26 19:47:03.227467] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12e8750): datao=0, datal=4096, cccid=4 00:13:08.062 [2024-11-26 19:47:03.227470] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x134cd40) on tqpair(0x12e8750): expected_datao=0, payload_size=4096 00:13:08.062 [2024-11-26 19:47:03.227473] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:08.062 [2024-11-26 19:47:03.227478] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:08.062 [2024-11-26 19:47:03.227481] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:08.062 [2024-11-26 19:47:03.227487] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:08.062 [2024-11-26 19:47:03.227491] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:08.062 [2024-11-26 19:47:03.227494] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:08.062 [2024-11-26 19:47:03.227496] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x134cd40) on tqpair=0x12e8750 00:13:08.062 [2024-11-26 19:47:03.227507] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:08.062 [2024-11-26 19:47:03.227514] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:08.062 [2024-11-26 19:47:03.227519] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:08.062 [2024-11-26 19:47:03.227522] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=4 on tqpair(0x12e8750) 00:13:08.062 [2024-11-26 19:47:03.227527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:08.062 [2024-11-26 19:47:03.227538] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x134cd40, cid 4, qid 0 00:13:08.062 [2024-11-26 19:47:03.227591] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:08.062 [2024-11-26 19:47:03.227600] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:08.062 [2024-11-26 19:47:03.227603] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:08.062 [2024-11-26 19:47:03.227605] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12e8750): datao=0, datal=4096, cccid=4 00:13:08.062 [2024-11-26 19:47:03.227608] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x134cd40) on tqpair(0x12e8750): expected_datao=0, payload_size=4096 00:13:08.062 [2024-11-26 19:47:03.227611] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:08.062 [2024-11-26 19:47:03.227616] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:08.062 [2024-11-26 19:47:03.227619] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:08.062 [2024-11-26 19:47:03.227625] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:08.062 [2024-11-26 19:47:03.227630] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:08.062 [2024-11-26 19:47:03.227632] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:08.062 [2024-11-26 19:47:03.227635] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x134cd40) on tqpair=0x12e8750 00:13:08.062 [2024-11-26 19:47:03.227640] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:08.062 [2024-11-26 19:47:03.227761] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:13:08.062 [2024-11-26 19:47:03.227780] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:13:08.062 [2024-11-26 19:47:03.227785] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:13:08.063 [2024-11-26 19:47:03.227789] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:08.063 [2024-11-26 19:47:03.227793] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:13:08.063 [2024-11-26 19:47:03.227797] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:13:08.063 [2024-11-26 19:47:03.227800] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:13:08.063 [2024-11-26 19:47:03.227804] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:13:08.063 [2024-11-26 19:47:03.227818] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:08.063 
[2024-11-26 19:47:03.227820] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12e8750) 00:13:08.063 [2024-11-26 19:47:03.227826] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:08.063 [2024-11-26 19:47:03.227832] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:08.063 [2024-11-26 19:47:03.227834] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:08.063 [2024-11-26 19:47:03.227837] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x12e8750) 00:13:08.063 [2024-11-26 19:47:03.227842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:13:08.063 [2024-11-26 19:47:03.227858] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x134cd40, cid 4, qid 0 00:13:08.063 [2024-11-26 19:47:03.227862] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x134cec0, cid 5, qid 0 00:13:08.063 [2024-11-26 19:47:03.227926] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:08.063 [2024-11-26 19:47:03.227935] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:08.063 [2024-11-26 19:47:03.227938] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:08.063 [2024-11-26 19:47:03.227941] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x134cd40) on tqpair=0x12e8750 00:13:08.063 [2024-11-26 19:47:03.227946] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:08.063 [2024-11-26 19:47:03.227951] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:08.063 [2024-11-26 19:47:03.227953] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:08.063 [2024-11-26 19:47:03.227956] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x134cec0) on tqpair=0x12e8750 00:13:08.063 [2024-11-26 19:47:03.227963] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:08.063 [2024-11-26 19:47:03.227966] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x12e8750) 00:13:08.063 [2024-11-26 19:47:03.227970] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:08.063 [2024-11-26 19:47:03.227981] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x134cec0, cid 5, qid 0 00:13:08.063 [2024-11-26 19:47:03.228035] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:08.063 [2024-11-26 19:47:03.228043] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:08.063 [2024-11-26 19:47:03.228046] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:08.063 [2024-11-26 19:47:03.228049] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x134cec0) on tqpair=0x12e8750 00:13:08.063 [2024-11-26 19:47:03.228056] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:08.063 [2024-11-26 19:47:03.228058] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x12e8750) 00:13:08.063 [2024-11-26 19:47:03.228063] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:08.063 [2024-11-26 19:47:03.228073] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x134cec0, cid 5, qid 0 00:13:08.063 [2024-11-26 19:47:03.228114] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:08.063 [2024-11-26 19:47:03.228118] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:08.063 [2024-11-26 19:47:03.228121] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:08.063 [2024-11-26 19:47:03.228123] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x134cec0) on tqpair=0x12e8750 00:13:08.063 [2024-11-26 19:47:03.228130] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:08.063 [2024-11-26 19:47:03.228133] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x12e8750) 00:13:08.063 [2024-11-26 19:47:03.228138] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:08.063 [2024-11-26 19:47:03.228147] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x134cec0, cid 5, qid 0 00:13:08.063 [2024-11-26 19:47:03.228190] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:08.063 [2024-11-26 19:47:03.228198] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:08.063 [2024-11-26 19:47:03.228201] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:08.063 [2024-11-26 19:47:03.228203] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x134cec0) on tqpair=0x12e8750 00:13:08.063 [2024-11-26 19:47:03.228216] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:08.063 [2024-11-26 19:47:03.228219] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x12e8750) 00:13:08.063 [2024-11-26 19:47:03.228224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:08.063 [2024-11-26 19:47:03.228230] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:08.063 [2024-11-26 19:47:03.228233] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12e8750) 00:13:08.063 [2024-11-26 19:47:03.228238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:08.063 [2024-11-26 19:47:03.228244] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:08.063 [2024-11-26 19:47:03.228246] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x12e8750) 00:13:08.063 [2024-11-26 19:47:03.228251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:08.063 [2024-11-26 19:47:03.228258] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:08.063 [2024-11-26 19:47:03.228261] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x12e8750) 00:13:08.063 [2024-11-26 19:47:03.228265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:08.063 [2024-11-26 19:47:03.228277] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x134cec0, cid 5, qid 0 00:13:08.063 
[2024-11-26 19:47:03.228281] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x134cd40, cid 4, qid 0 00:13:08.063 [2024-11-26 19:47:03.228284] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x134d040, cid 6, qid 0 00:13:08.063 [2024-11-26 19:47:03.228288] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x134d1c0, cid 7, qid 0 00:13:08.063 [2024-11-26 19:47:03.228428] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:08.063 [2024-11-26 19:47:03.228439] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:08.063 [2024-11-26 19:47:03.228442] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:08.063 [2024-11-26 19:47:03.228444] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12e8750): datao=0, datal=8192, cccid=5 00:13:08.063 [2024-11-26 19:47:03.228447] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x134cec0) on tqpair(0x12e8750): expected_datao=0, payload_size=8192 00:13:08.063 [2024-11-26 19:47:03.228450] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:08.063 [2024-11-26 19:47:03.228463] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:08.063 [2024-11-26 19:47:03.228466] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:08.063 [2024-11-26 19:47:03.228470] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:08.063 [2024-11-26 19:47:03.228475] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:08.063 [2024-11-26 19:47:03.228477] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:08.063 [2024-11-26 19:47:03.228479] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12e8750): datao=0, datal=512, cccid=4 00:13:08.063 [2024-11-26 19:47:03.228483] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x134cd40) on tqpair(0x12e8750): expected_datao=0, payload_size=512 00:13:08.064 [2024-11-26 19:47:03.228485] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:08.064 [2024-11-26 19:47:03.228491] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:08.064 [2024-11-26 19:47:03.228493] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:08.064 [2024-11-26 19:47:03.228497] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:08.064 [2024-11-26 19:47:03.228502] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:08.064 [2024-11-26 19:47:03.228504] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:08.064 [2024-11-26 19:47:03.228506] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12e8750): datao=0, datal=512, cccid=6 00:13:08.064 [2024-11-26 19:47:03.228509] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x134d040) on tqpair(0x12e8750): expected_datao=0, payload_size=512 00:13:08.064 [2024-11-26 19:47:03.228512] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:08.064 [2024-11-26 19:47:03.228517] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:08.064 [2024-11-26 19:47:03.228520] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:08.064 [2024-11-26 19:47:03.228524] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:08.064 [2024-11-26 19:47:03.228528] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:08.064 [2024-11-26 19:47:03.228531] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:08.064 [2024-11-26 19:47:03.228533] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12e8750): datao=0, datal=4096, cccid=7 00:13:08.064 [2024-11-26 19:47:03.228536] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x134d1c0) on tqpair(0x12e8750): expected_datao=0, payload_size=4096 00:13:08.064 [2024-11-26 19:47:03.228539] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:08.064 [2024-11-26 19:47:03.228544] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:08.064 [2024-11-26 19:47:03.228547] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:08.064 [2024-11-26 19:47:03.228553] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:08.064 [2024-11-26 19:47:03.228557] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:08.064 [2024-11-26 19:47:03.228560] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:08.064 [2024-11-26 19:47:03.228562] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x134cec0) on tqpair=0x12e8750 00:13:08.064 [2024-11-26 19:47:03.228573] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:08.064 [2024-11-26 19:47:03.228577] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:08.064 [2024-11-26 19:47:03.228580] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:08.064 [2024-11-26 19:47:03.228582] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x134cd40) on tqpair=0x12e8750 00:13:08.064 [2024-11-26 19:47:03.228591] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:08.064 [2024-11-26 19:47:03.228596] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:08.064 [2024-11-26 19:47:03.228598] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:08.064 [2024-11-26 19:47:03.228600] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x134d040) on tqpair=0x12e8750 00:13:08.064 [2024-11-26 19:47:03.228606] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:08.064 [2024-11-26 19:47:03.228610] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:08.064 [2024-11-26 19:47:03.228613] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:08.064 [2024-11-26 19:47:03.228615] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x134d1c0) on tqpair=0x12e8750 00:13:08.064 ===================================================== 00:13:08.064 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:13:08.064 ===================================================== 00:13:08.064 Controller Capabilities/Features 00:13:08.064 ================================ 00:13:08.064 Vendor ID: 8086 00:13:08.064 Subsystem Vendor ID: 8086 00:13:08.064 Serial Number: SPDK00000000000001 00:13:08.064 Model Number: SPDK bdev Controller 00:13:08.064 Firmware Version: 25.01 00:13:08.064 Recommended Arb Burst: 6 00:13:08.064 IEEE OUI Identifier: e4 d2 5c 00:13:08.064 Multi-path I/O 00:13:08.064 May have multiple subsystem ports: Yes 00:13:08.064 May have multiple controllers: Yes 00:13:08.064 Associated with SR-IOV VF: No 00:13:08.064 Max Data Transfer Size: 131072 00:13:08.064 Max Number of Namespaces: 32 00:13:08.064 Max Number of I/O Queues: 127 00:13:08.064 NVMe Specification Version (VS): 1.3 00:13:08.064 NVMe Specification Version (Identify): 1.3 
00:13:08.064 Maximum Queue Entries: 128 00:13:08.064 Contiguous Queues Required: Yes 00:13:08.064 Arbitration Mechanisms Supported 00:13:08.064 Weighted Round Robin: Not Supported 00:13:08.064 Vendor Specific: Not Supported 00:13:08.064 Reset Timeout: 15000 ms 00:13:08.064 Doorbell Stride: 4 bytes 00:13:08.064 NVM Subsystem Reset: Not Supported 00:13:08.064 Command Sets Supported 00:13:08.064 NVM Command Set: Supported 00:13:08.064 Boot Partition: Not Supported 00:13:08.064 Memory Page Size Minimum: 4096 bytes 00:13:08.064 Memory Page Size Maximum: 4096 bytes 00:13:08.064 Persistent Memory Region: Not Supported 00:13:08.064 Optional Asynchronous Events Supported 00:13:08.064 Namespace Attribute Notices: Supported 00:13:08.064 Firmware Activation Notices: Not Supported 00:13:08.064 ANA Change Notices: Not Supported 00:13:08.064 PLE Aggregate Log Change Notices: Not Supported 00:13:08.064 LBA Status Info Alert Notices: Not Supported 00:13:08.064 EGE Aggregate Log Change Notices: Not Supported 00:13:08.064 Normal NVM Subsystem Shutdown event: Not Supported 00:13:08.064 Zone Descriptor Change Notices: Not Supported 00:13:08.064 Discovery Log Change Notices: Not Supported 00:13:08.064 Controller Attributes 00:13:08.064 128-bit Host Identifier: Supported 00:13:08.064 Non-Operational Permissive Mode: Not Supported 00:13:08.064 NVM Sets: Not Supported 00:13:08.064 Read Recovery Levels: Not Supported 00:13:08.064 Endurance Groups: Not Supported 00:13:08.064 Predictable Latency Mode: Not Supported 00:13:08.064 Traffic Based Keep ALive: Not Supported 00:13:08.064 Namespace Granularity: Not Supported 00:13:08.064 SQ Associations: Not Supported 00:13:08.064 UUID List: Not Supported 00:13:08.064 Multi-Domain Subsystem: Not Supported 00:13:08.064 Fixed Capacity Management: Not Supported 00:13:08.064 Variable Capacity Management: Not Supported 00:13:08.064 Delete Endurance Group: Not Supported 00:13:08.064 Delete NVM Set: Not Supported 00:13:08.064 Extended LBA Formats Supported: Not Supported 00:13:08.064 Flexible Data Placement Supported: Not Supported 00:13:08.064 00:13:08.064 Controller Memory Buffer Support 00:13:08.064 ================================ 00:13:08.064 Supported: No 00:13:08.064 00:13:08.064 Persistent Memory Region Support 00:13:08.064 ================================ 00:13:08.064 Supported: No 00:13:08.064 00:13:08.064 Admin Command Set Attributes 00:13:08.064 ============================ 00:13:08.064 Security Send/Receive: Not Supported 00:13:08.064 Format NVM: Not Supported 00:13:08.064 Firmware Activate/Download: Not Supported 00:13:08.064 Namespace Management: Not Supported 00:13:08.065 Device Self-Test: Not Supported 00:13:08.065 Directives: Not Supported 00:13:08.065 NVMe-MI: Not Supported 00:13:08.065 Virtualization Management: Not Supported 00:13:08.065 Doorbell Buffer Config: Not Supported 00:13:08.065 Get LBA Status Capability: Not Supported 00:13:08.065 Command & Feature Lockdown Capability: Not Supported 00:13:08.065 Abort Command Limit: 4 00:13:08.065 Async Event Request Limit: 4 00:13:08.065 Number of Firmware Slots: N/A 00:13:08.065 Firmware Slot 1 Read-Only: N/A 00:13:08.065 Firmware Activation Without Reset: N/A 00:13:08.065 Multiple Update Detection Support: N/A 00:13:08.065 Firmware Update Granularity: No Information Provided 00:13:08.065 Per-Namespace SMART Log: No 00:13:08.065 Asymmetric Namespace Access Log Page: Not Supported 00:13:08.065 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:13:08.065 Command Effects Log Page: Supported 00:13:08.065 Get Log Page Extended 
Data: Supported 00:13:08.065 Telemetry Log Pages: Not Supported 00:13:08.065 Persistent Event Log Pages: Not Supported 00:13:08.065 Supported Log Pages Log Page: May Support 00:13:08.065 Commands Supported & Effects Log Page: Not Supported 00:13:08.065 Feature Identifiers & Effects Log Page:May Support 00:13:08.065 NVMe-MI Commands & Effects Log Page: May Support 00:13:08.065 Data Area 4 for Telemetry Log: Not Supported 00:13:08.065 Error Log Page Entries Supported: 128 00:13:08.065 Keep Alive: Supported 00:13:08.065 Keep Alive Granularity: 10000 ms 00:13:08.065 00:13:08.065 NVM Command Set Attributes 00:13:08.065 ========================== 00:13:08.065 Submission Queue Entry Size 00:13:08.065 Max: 64 00:13:08.065 Min: 64 00:13:08.065 Completion Queue Entry Size 00:13:08.065 Max: 16 00:13:08.065 Min: 16 00:13:08.065 Number of Namespaces: 32 00:13:08.065 Compare Command: Supported 00:13:08.065 Write Uncorrectable Command: Not Supported 00:13:08.065 Dataset Management Command: Supported 00:13:08.065 Write Zeroes Command: Supported 00:13:08.065 Set Features Save Field: Not Supported 00:13:08.065 Reservations: Supported 00:13:08.065 Timestamp: Not Supported 00:13:08.065 Copy: Supported 00:13:08.065 Volatile Write Cache: Present 00:13:08.065 Atomic Write Unit (Normal): 1 00:13:08.065 Atomic Write Unit (PFail): 1 00:13:08.065 Atomic Compare & Write Unit: 1 00:13:08.065 Fused Compare & Write: Supported 00:13:08.065 Scatter-Gather List 00:13:08.065 SGL Command Set: Supported 00:13:08.065 SGL Keyed: Supported 00:13:08.065 SGL Bit Bucket Descriptor: Not Supported 00:13:08.065 SGL Metadata Pointer: Not Supported 00:13:08.065 Oversized SGL: Not Supported 00:13:08.065 SGL Metadata Address: Not Supported 00:13:08.065 SGL Offset: Supported 00:13:08.065 Transport SGL Data Block: Not Supported 00:13:08.065 Replay Protected Memory Block: Not Supported 00:13:08.065 00:13:08.065 Firmware Slot Information 00:13:08.065 ========================= 00:13:08.065 Active slot: 1 00:13:08.065 Slot 1 Firmware Revision: 25.01 00:13:08.065 00:13:08.065 00:13:08.065 Commands Supported and Effects 00:13:08.065 ============================== 00:13:08.065 Admin Commands 00:13:08.065 -------------- 00:13:08.065 Get Log Page (02h): Supported 00:13:08.065 Identify (06h): Supported 00:13:08.065 Abort (08h): Supported 00:13:08.065 Set Features (09h): Supported 00:13:08.065 Get Features (0Ah): Supported 00:13:08.065 Asynchronous Event Request (0Ch): Supported 00:13:08.065 Keep Alive (18h): Supported 00:13:08.065 I/O Commands 00:13:08.065 ------------ 00:13:08.065 Flush (00h): Supported LBA-Change 00:13:08.065 Write (01h): Supported LBA-Change 00:13:08.065 Read (02h): Supported 00:13:08.065 Compare (05h): Supported 00:13:08.065 Write Zeroes (08h): Supported LBA-Change 00:13:08.065 Dataset Management (09h): Supported LBA-Change 00:13:08.065 Copy (19h): Supported LBA-Change 00:13:08.065 00:13:08.065 Error Log 00:13:08.065 ========= 00:13:08.065 00:13:08.065 Arbitration 00:13:08.065 =========== 00:13:08.065 Arbitration Burst: 1 00:13:08.065 00:13:08.065 Power Management 00:13:08.065 ================ 00:13:08.065 Number of Power States: 1 00:13:08.065 Current Power State: Power State #0 00:13:08.065 Power State #0: 00:13:08.065 Max Power: 0.00 W 00:13:08.065 Non-Operational State: Operational 00:13:08.065 Entry Latency: Not Reported 00:13:08.065 Exit Latency: Not Reported 00:13:08.065 Relative Read Throughput: 0 00:13:08.065 Relative Read Latency: 0 00:13:08.065 Relative Write Throughput: 0 00:13:08.065 Relative Write Latency: 0 
00:13:08.065 Idle Power: Not Reported 00:13:08.065 Active Power: Not Reported 00:13:08.065 Non-Operational Permissive Mode: Not Supported 00:13:08.065 00:13:08.065 Health Information 00:13:08.065 ================== 00:13:08.065 Critical Warnings: 00:13:08.065 Available Spare Space: OK 00:13:08.065 Temperature: OK 00:13:08.065 Device Reliability: OK 00:13:08.065 Read Only: No 00:13:08.065 Volatile Memory Backup: OK 00:13:08.065 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:08.065 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:08.065 Available Spare: 0% 00:13:08.065 Available Spare Threshold: 0% 00:13:08.065 Life Percentage Used:[2024-11-26 19:47:03.228701] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:08.065 [2024-11-26 19:47:03.228705] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x12e8750) 00:13:08.065 [2024-11-26 19:47:03.228710] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:08.065 [2024-11-26 19:47:03.228722] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x134d1c0, cid 7, qid 0 00:13:08.065 [2024-11-26 19:47:03.228789] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:08.065 [2024-11-26 19:47:03.228794] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:08.065 [2024-11-26 19:47:03.228797] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:08.065 [2024-11-26 19:47:03.228799] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x134d1c0) on tqpair=0x12e8750 00:13:08.065 [2024-11-26 19:47:03.228827] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:13:08.065 [2024-11-26 19:47:03.228834] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x134c740) on tqpair=0x12e8750 00:13:08.066 [2024-11-26 19:47:03.228839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:08.066 [2024-11-26 19:47:03.228842] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x134c8c0) on tqpair=0x12e8750 00:13:08.066 [2024-11-26 19:47:03.228845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:08.066 [2024-11-26 19:47:03.228849] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x134ca40) on tqpair=0x12e8750 00:13:08.066 [2024-11-26 19:47:03.228852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:08.066 [2024-11-26 19:47:03.228856] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x134cbc0) on tqpair=0x12e8750 00:13:08.066 [2024-11-26 19:47:03.228859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:08.066 [2024-11-26 19:47:03.228866] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:08.066 [2024-11-26 19:47:03.228868] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:08.066 [2024-11-26 19:47:03.228871] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12e8750) 00:13:08.066 [2024-11-26 19:47:03.228876] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:13:08.066 [2024-11-26 19:47:03.228889] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x134cbc0, cid 3, qid 0 00:13:08.066 [2024-11-26 19:47:03.228934] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:08.066 [2024-11-26 19:47:03.228940] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:08.066 [2024-11-26 19:47:03.228942] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:08.066 [2024-11-26 19:47:03.228945] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x134cbc0) on tqpair=0x12e8750 00:13:08.066 [2024-11-26 19:47:03.228951] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:08.066 [2024-11-26 19:47:03.228953] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:08.066 [2024-11-26 19:47:03.228956] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12e8750) 00:13:08.066 [2024-11-26 19:47:03.228961] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:08.066 [2024-11-26 19:47:03.228973] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x134cbc0, cid 3, qid 0 00:13:08.066 [2024-11-26 19:47:03.229035] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:08.066 [2024-11-26 19:47:03.229040] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:08.066 [2024-11-26 19:47:03.229042] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:08.066 [2024-11-26 19:47:03.229045] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x134cbc0) on tqpair=0x12e8750 00:13:08.066 [2024-11-26 19:47:03.229048] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:13:08.066 [2024-11-26 19:47:03.229052] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:13:08.066 [2024-11-26 19:47:03.229058] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:08.066 [2024-11-26 19:47:03.229061] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:08.066 [2024-11-26 19:47:03.229064] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12e8750) 00:13:08.066 [2024-11-26 19:47:03.229069] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:08.066 [2024-11-26 19:47:03.229078] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x134cbc0, cid 3, qid 0 00:13:08.066 [2024-11-26 19:47:03.229112] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:08.066 [2024-11-26 19:47:03.229121] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:08.066 [2024-11-26 19:47:03.229124] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:08.066 [2024-11-26 19:47:03.229126] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x134cbc0) on tqpair=0x12e8750 00:13:08.066 [2024-11-26 19:47:03.229134] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:08.066 [2024-11-26 19:47:03.229137] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:08.066 [2024-11-26 19:47:03.229139] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12e8750) 00:13:08.066 [2024-11-26 19:47:03.229145] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:08.066 [2024-11-26 19:47:03.229154] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x134cbc0, cid 3, qid 0 00:13:08.066 [2024-11-26 19:47:03.229196] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:08.066 [2024-11-26 19:47:03.229204] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:08.066 [2024-11-26 19:47:03.229207] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:08.066 [2024-11-26 19:47:03.229210] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x134cbc0) on tqpair=0x12e8750 00:13:08.066 [2024-11-26 19:47:03.229217] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:08.066 [2024-11-26 19:47:03.229220] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:08.066 [2024-11-26 19:47:03.229222] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12e8750) 00:13:08.066 [2024-11-26 19:47:03.229228] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:08.066 [2024-11-26 19:47:03.229238] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x134cbc0, cid 3, qid 0 00:13:08.066 [2024-11-26 19:47:03.229277] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:08.066 [2024-11-26 19:47:03.229282] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:08.066 [2024-11-26 19:47:03.229284] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:08.066 [2024-11-26 19:47:03.229287] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x134cbc0) on tqpair=0x12e8750 00:13:08.066 [2024-11-26 19:47:03.229294] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:08.066 [2024-11-26 19:47:03.229296] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:08.066 [2024-11-26 19:47:03.229299] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12e8750) 00:13:08.066 [2024-11-26 19:47:03.229304] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:08.066 [2024-11-26 19:47:03.229314] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x134cbc0, cid 3, qid 0 00:13:08.066 [2024-11-26 19:47:03.229360] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:08.066 [2024-11-26 19:47:03.229369] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:08.066 [2024-11-26 19:47:03.229371] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:08.066 [2024-11-26 19:47:03.229374] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x134cbc0) on tqpair=0x12e8750 00:13:08.066 [2024-11-26 19:47:03.229381] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:08.066 [2024-11-26 19:47:03.229384] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:08.066 [2024-11-26 19:47:03.229387] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12e8750) 00:13:08.066 [2024-11-26 19:47:03.229392] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:08.066 [2024-11-26 19:47:03.229402] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x134cbc0, cid 3, qid 0 00:13:08.067 [2024-11-26 19:47:03.229448] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:08.067 [2024-11-26 19:47:03.229453] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:08.067 [2024-11-26 19:47:03.229455] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:08.067 [2024-11-26 19:47:03.229458] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x134cbc0) on tqpair=0x12e8750 00:13:08.067 [2024-11-26 19:47:03.229465] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:08.067 [2024-11-26 19:47:03.229468] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:08.067 [2024-11-26 19:47:03.229470] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12e8750) 00:13:08.067 [2024-11-26 19:47:03.229475] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:08.067 [2024-11-26 19:47:03.229485] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x134cbc0, cid 3, qid 0 00:13:08.067 [2024-11-26 19:47:03.229519] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:08.067 [2024-11-26 19:47:03.229524] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:08.067 [2024-11-26 19:47:03.229527] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:08.067 [2024-11-26 19:47:03.229529] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x134cbc0) on tqpair=0x12e8750 00:13:08.067 [2024-11-26 19:47:03.229537] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:08.067 [2024-11-26 19:47:03.229540] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:08.067 [2024-11-26 19:47:03.229542] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12e8750) 00:13:08.067 [2024-11-26 19:47:03.229548] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:08.067 [2024-11-26 19:47:03.229557] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x134cbc0, cid 3, qid 0 00:13:08.067 [2024-11-26 19:47:03.229603] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:08.067 [2024-11-26 19:47:03.229608] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:08.067 [2024-11-26 19:47:03.229610] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:08.067 [2024-11-26 19:47:03.229613] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x134cbc0) on tqpair=0x12e8750 00:13:08.067 [2024-11-26 19:47:03.229620] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:08.067 [2024-11-26 19:47:03.229623] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:08.067 [2024-11-26 19:47:03.229625] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12e8750) 00:13:08.067 [2024-11-26 19:47:03.229631] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:08.067 [2024-11-26 19:47:03.229640] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x134cbc0, cid 3, qid 0 00:13:08.067 [2024-11-26 19:47:03.229675] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:08.067 [2024-11-26 
19:47:03.229680] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:08.067 [2024-11-26 19:47:03.229683] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:08.067 [2024-11-26 19:47:03.229686] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x134cbc0) on tqpair=0x12e8750 00:13:08.067 [2024-11-26 19:47:03.229693] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:08.067 [2024-11-26 19:47:03.229696] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:08.067 [2024-11-26 19:47:03.229698] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12e8750) 00:13:08.067 [2024-11-26 19:47:03.229703] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:08.067 [2024-11-26 19:47:03.229713] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x134cbc0, cid 3, qid 0 00:13:08.067 [2024-11-26 19:47:03.229759] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:08.067 [2024-11-26 19:47:03.233778] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:08.067 [2024-11-26 19:47:03.233791] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:08.067 [2024-11-26 19:47:03.233794] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x134cbc0) on tqpair=0x12e8750 00:13:08.067 [2024-11-26 19:47:03.233803] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:08.067 [2024-11-26 19:47:03.233806] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:08.067 [2024-11-26 19:47:03.233809] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12e8750) 00:13:08.067 [2024-11-26 19:47:03.233814] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:08.067 [2024-11-26 19:47:03.233829] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x134cbc0, cid 3, qid 0 00:13:08.067 [2024-11-26 19:47:03.233872] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:08.067 [2024-11-26 19:47:03.233877] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:08.067 [2024-11-26 19:47:03.233879] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:08.067 [2024-11-26 19:47:03.233882] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x134cbc0) on tqpair=0x12e8750 00:13:08.067 [2024-11-26 19:47:03.233887] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:13:08.067 0% 00:13:08.067 Data Units Read: 0 00:13:08.067 Data Units Written: 0 00:13:08.067 Host Read Commands: 0 00:13:08.067 Host Write Commands: 0 00:13:08.067 Controller Busy Time: 0 minutes 00:13:08.067 Power Cycles: 0 00:13:08.067 Power On Hours: 0 hours 00:13:08.067 Unsafe Shutdowns: 0 00:13:08.067 Unrecoverable Media Errors: 0 00:13:08.067 Lifetime Error Log Entries: 0 00:13:08.067 Warning Temperature Time: 0 minutes 00:13:08.067 Critical Temperature Time: 0 minutes 00:13:08.067 00:13:08.067 Number of Queues 00:13:08.067 ================ 00:13:08.067 Number of I/O Submission Queues: 127 00:13:08.067 Number of I/O Completion Queues: 127 00:13:08.067 00:13:08.067 Active Namespaces 00:13:08.067 ================= 00:13:08.067 Namespace ID:1 00:13:08.067 Error Recovery Timeout: Unlimited 00:13:08.067 
Command Set Identifier: NVM (00h) 00:13:08.067 Deallocate: Supported 00:13:08.067 Deallocated/Unwritten Error: Not Supported 00:13:08.067 Deallocated Read Value: Unknown 00:13:08.067 Deallocate in Write Zeroes: Not Supported 00:13:08.067 Deallocated Guard Field: 0xFFFF 00:13:08.067 Flush: Supported 00:13:08.067 Reservation: Supported 00:13:08.067 Namespace Sharing Capabilities: Multiple Controllers 00:13:08.067 Size (in LBAs): 131072 (0GiB) 00:13:08.067 Capacity (in LBAs): 131072 (0GiB) 00:13:08.067 Utilization (in LBAs): 131072 (0GiB) 00:13:08.067 NGUID: ABCDEF0123456789ABCDEF0123456789 00:13:08.067 EUI64: ABCDEF0123456789 00:13:08.067 UUID: 606a7f72-657a-4957-b297-d5d9d024de94 00:13:08.067 Thin Provisioning: Not Supported 00:13:08.067 Per-NS Atomic Units: Yes 00:13:08.067 Atomic Boundary Size (Normal): 0 00:13:08.067 Atomic Boundary Size (PFail): 0 00:13:08.067 Atomic Boundary Offset: 0 00:13:08.067 Maximum Single Source Range Length: 65535 00:13:08.067 Maximum Copy Length: 65535 00:13:08.067 Maximum Source Range Count: 1 00:13:08.067 NGUID/EUI64 Never Reused: No 00:13:08.067 Namespace Write Protected: No 00:13:08.067 Number of LBA Formats: 1 00:13:08.067 Current LBA Format: LBA Format #00 00:13:08.067 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:08.067 00:13:08.067 19:47:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:13:08.326 19:47:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:08.326 19:47:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.326 19:47:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:08.326 19:47:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.326 19:47:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:13:08.326 19:47:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:13:08.326 19:47:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:08.326 19:47:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:13:08.326 19:47:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:08.326 19:47:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:13:08.326 19:47:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:08.326 19:47:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:08.326 rmmod nvme_tcp 00:13:08.326 rmmod nvme_fabrics 00:13:08.326 rmmod nvme_keyring 00:13:08.326 19:47:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:08.326 19:47:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:13:08.326 19:47:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:13:08.326 19:47:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 72704 ']' 00:13:08.326 19:47:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 72704 00:13:08.326 19:47:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 72704 ']' 00:13:08.326 19:47:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 72704 00:13:08.326 19:47:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:13:08.326 19:47:03 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:08.326 19:47:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72704 00:13:08.583 killing process with pid 72704 00:13:08.584 19:47:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:08.584 19:47:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:08.584 19:47:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72704' 00:13:08.584 19:47:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 72704 00:13:08.584 19:47:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 72704 00:13:08.584 19:47:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:08.584 19:47:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:08.584 19:47:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:08.584 19:47:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:13:08.584 19:47:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:13:08.584 19:47:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:13:08.584 19:47:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:08.584 19:47:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:08.584 19:47:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:08.584 19:47:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:08.584 19:47:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:08.584 19:47:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:08.584 19:47:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:08.584 19:47:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:08.584 19:47:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:08.584 19:47:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:08.584 19:47:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:08.584 19:47:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:08.843 19:47:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:08.843 19:47:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:08.843 19:47:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:08.843 19:47:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:08.843 19:47:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:08.843 19:47:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:08.843 19:47:03 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:08.843 19:47:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:08.843 19:47:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:13:08.843 00:13:08.843 real 0m1.973s 00:13:08.843 user 0m4.542s 00:13:08.843 sys 0m0.526s 00:13:08.843 19:47:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:08.843 ************************************ 00:13:08.843 END TEST nvmf_identify 00:13:08.843 ************************************ 00:13:08.843 19:47:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:13:08.843 19:47:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:13:08.843 19:47:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:08.843 19:47:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:08.843 19:47:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:13:08.843 ************************************ 00:13:08.843 START TEST nvmf_perf 00:13:08.843 ************************************ 00:13:08.843 19:47:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:13:08.843 * Looking for test storage... 00:13:08.843 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:13:08.843 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:08.843 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:08.843 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:13:09.101 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:09.101 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:09.101 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:09.101 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:09.101 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:13:09.101 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:13:09.101 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:13:09.101 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:13:09.101 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:13:09.101 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:13:09.101 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:13:09.101 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:09.101 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:13:09.101 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:13:09.101 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:09.101 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:09.101 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:13:09.101 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:13:09.101 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:09.101 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:13:09.101 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:13:09.101 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:13:09.101 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:13:09.101 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:09.101 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:13:09.101 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:13:09.101 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:09.101 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:09.101 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:13:09.101 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:09.101 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:09.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.101 --rc genhtml_branch_coverage=1 00:13:09.101 --rc genhtml_function_coverage=1 00:13:09.101 --rc genhtml_legend=1 00:13:09.101 --rc geninfo_all_blocks=1 00:13:09.101 --rc geninfo_unexecuted_blocks=1 00:13:09.101 00:13:09.101 ' 00:13:09.101 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:09.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.101 --rc genhtml_branch_coverage=1 00:13:09.101 --rc genhtml_function_coverage=1 00:13:09.101 --rc genhtml_legend=1 00:13:09.101 --rc geninfo_all_blocks=1 00:13:09.101 --rc geninfo_unexecuted_blocks=1 00:13:09.101 00:13:09.101 ' 00:13:09.101 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:09.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.101 --rc genhtml_branch_coverage=1 00:13:09.101 --rc genhtml_function_coverage=1 00:13:09.101 --rc genhtml_legend=1 00:13:09.101 --rc geninfo_all_blocks=1 00:13:09.101 --rc geninfo_unexecuted_blocks=1 00:13:09.101 00:13:09.101 ' 00:13:09.101 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:09.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.101 --rc genhtml_branch_coverage=1 00:13:09.101 --rc genhtml_function_coverage=1 00:13:09.101 --rc genhtml_legend=1 00:13:09.101 --rc geninfo_all_blocks=1 00:13:09.101 --rc geninfo_unexecuted_blocks=1 00:13:09.101 00:13:09.101 ' 00:13:09.101 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:09.101 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:13:09.101 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:09.101 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:09.101 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:13:09.101 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:09.101 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:09.101 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:09.101 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:09.101 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:09.101 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:09.101 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:09.101 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:13:09.101 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=91838eb1-5852-43eb-90b2-09876f360ab2 00:13:09.101 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:09.101 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:09.101 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:09.101 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:09.102 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:09.102 Cannot find device "nvmf_init_br" 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:09.102 Cannot find device "nvmf_init_br2" 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:09.102 Cannot find device "nvmf_tgt_br" 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:09.102 Cannot find device "nvmf_tgt_br2" 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:09.102 Cannot find device "nvmf_init_br" 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:09.102 Cannot find device "nvmf_init_br2" 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:09.102 Cannot find device "nvmf_tgt_br" 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:09.102 Cannot find device "nvmf_tgt_br2" 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:09.102 Cannot find device "nvmf_br" 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:09.102 Cannot find device "nvmf_init_if" 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:09.102 Cannot find device "nvmf_init_if2" 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:09.102 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:09.102 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:09.102 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:09.103 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:09.103 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:09.103 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:09.103 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:09.103 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:09.103 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:09.103 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:09.103 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:09.103 19:47:04 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:09.103 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:09.103 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:09.103 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:09.103 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:09.103 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:09.103 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:09.103 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:09.362 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:09.362 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:09.362 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:09.362 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:09.362 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:09.362 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:09.362 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:09.362 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:09.362 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:09.363 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:09.363 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:09.363 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:09.363 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:09.363 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:09.363 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:09.363 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:09.363 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:13:09.363 00:13:09.363 --- 10.0.0.3 ping statistics --- 00:13:09.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:09.363 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:13:09.363 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:09.363 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:13:09.363 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.071 ms 00:13:09.363 00:13:09.363 --- 10.0.0.4 ping statistics --- 00:13:09.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:09.363 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:13:09.363 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:09.363 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:09.363 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:13:09.363 00:13:09.363 --- 10.0.0.1 ping statistics --- 00:13:09.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:09.363 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:13:09.363 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:09.363 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:09.363 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:13:09.363 00:13:09.363 --- 10.0.0.2 ping statistics --- 00:13:09.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:09.363 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:13:09.363 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:09.363 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 00:13:09.363 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:09.363 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:09.363 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:09.363 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:09.363 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:09.363 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:09.363 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:09.363 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:13:09.363 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:09.363 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:09.363 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:13:09.363 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=72952 00:13:09.363 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 72952 00:13:09.363 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 72952 ']' 00:13:09.363 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:09.363 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:09.363 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:09.363 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:09.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
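For context, the target bring-up that perf.sh drives over the rest of this run reduces to roughly the sequence below. This is only a condensed sketch: the listener address (10.0.0.3:4420), the subsystem NQN, the Malloc bdev geometry (64 MB, 512-byte blocks) and the perf flags are taken from the rpc.py and spdk_nvme_perf invocations logged in this job, with repo paths shortened.

    # start the target inside the test netns (what nvmfappstart does above)
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

    # create the TCP transport and a subsystem backed by a 64 MB, 512-byte-block Malloc bdev
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py bdev_malloc_create 64 512
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

    # exercise the target from the initiator side, e.g. the first perf pass below
    ./build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'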
00:13:09.363 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:09.363 19:47:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:13:09.363 [2024-11-26 19:47:04.498870] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:13:09.363 [2024-11-26 19:47:04.498930] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:09.624 [2024-11-26 19:47:04.640494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:09.624 [2024-11-26 19:47:04.679374] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:09.624 [2024-11-26 19:47:04.679418] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:09.624 [2024-11-26 19:47:04.679426] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:09.624 [2024-11-26 19:47:04.679432] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:09.624 [2024-11-26 19:47:04.679438] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:09.624 [2024-11-26 19:47:04.680169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:09.624 [2024-11-26 19:47:04.680670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:09.624 [2024-11-26 19:47:04.680948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:09.624 [2024-11-26 19:47:04.681083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.624 [2024-11-26 19:47:04.714433] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:10.211 19:47:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:10.211 19:47:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:13:10.211 19:47:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:10.211 19:47:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:10.211 19:47:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:13:10.211 19:47:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:10.211 19:47:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:10.211 19:47:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:13:10.781 19:47:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:13:10.781 19:47:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:13:10.781 19:47:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:13:10.781 19:47:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:11.041 19:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:13:11.041 19:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- 
# '[' -n 0000:00:10.0 ']' 00:13:11.041 19:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:13:11.041 19:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:13:11.041 19:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:11.303 [2024-11-26 19:47:06.402086] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:11.303 19:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:11.562 19:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:13:11.562 19:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:11.822 19:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:13:11.822 19:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:13:11.822 19:47:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:12.083 [2024-11-26 19:47:07.243175] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:12.083 19:47:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:13:12.342 19:47:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:13:12.342 19:47:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:13:12.342 19:47:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:13:12.342 19:47:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:13:13.719 Initializing NVMe Controllers 00:13:13.719 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:13.719 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:13:13.719 Initialization complete. Launching workers. 00:13:13.719 ======================================================== 00:13:13.719 Latency(us) 00:13:13.719 Device Information : IOPS MiB/s Average min max 00:13:13.719 PCIE (0000:00:10.0) NSID 1 from core 0: 32986.03 128.85 969.82 233.14 6863.68 00:13:13.719 ======================================================== 00:13:13.719 Total : 32986.03 128.85 969.82 233.14 6863.68 00:13:13.719 00:13:13.719 19:47:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:13:14.651 Initializing NVMe Controllers 00:13:14.651 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:13:14.651 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:14.651 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:14.651 Initialization complete. Launching workers. 
00:13:14.651 ======================================================== 00:13:14.651 Latency(us) 00:13:14.651 Device Information : IOPS MiB/s Average min max 00:13:14.651 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5098.66 19.92 195.87 77.17 4174.74 00:13:14.651 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 106.57 0.42 9681.68 6979.15 135997.98 00:13:14.651 ======================================================== 00:13:14.651 Total : 5205.24 20.33 390.09 77.17 135997.98 00:13:14.651 00:13:14.908 19:47:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:13:16.280 Initializing NVMe Controllers 00:13:16.280 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:13:16.280 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:16.280 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:16.280 Initialization complete. Launching workers. 00:13:16.280 ======================================================== 00:13:16.280 Latency(us) 00:13:16.280 Device Information : IOPS MiB/s Average min max 00:13:16.280 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9723.83 37.98 3293.82 412.70 8721.58 00:13:16.280 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2997.41 11.71 10689.29 5283.39 216634.81 00:13:16.280 ======================================================== 00:13:16.280 Total : 12721.23 49.69 5036.36 412.70 216634.81 00:13:16.280 00:13:16.280 19:47:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:13:16.280 19:47:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:13:18.814 Initializing NVMe Controllers 00:13:18.814 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:13:18.814 Controller IO queue size 128, less than required. 00:13:18.814 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:18.814 Controller IO queue size 128, less than required. 00:13:18.814 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:18.814 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:18.814 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:18.814 Initialization complete. Launching workers. 
00:13:18.814 ======================================================== 00:13:18.814 Latency(us) 00:13:18.814 Device Information : IOPS MiB/s Average min max 00:13:18.814 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2449.89 612.47 52868.94 30001.12 86828.51 00:13:18.814 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 672.47 168.12 195443.91 63046.95 348717.01 00:13:18.814 ======================================================== 00:13:18.814 Total : 3122.37 780.59 83575.64 30001.12 348717.01 00:13:18.814 00:13:18.814 19:47:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:13:19.070 Initializing NVMe Controllers 00:13:19.071 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:13:19.071 Controller IO queue size 128, less than required. 00:13:19.071 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:19.071 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:13:19.071 Controller IO queue size 128, less than required. 00:13:19.071 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:19.071 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:13:19.071 WARNING: Some requested NVMe devices were skipped 00:13:19.071 No valid NVMe controllers or AIO or URING devices found 00:13:19.071 19:47:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:13:21.596 Initializing NVMe Controllers 00:13:21.596 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:13:21.596 Controller IO queue size 128, less than required. 00:13:21.596 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:21.596 Controller IO queue size 128, less than required. 00:13:21.596 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:21.596 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:21.596 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:21.596 Initialization complete. Launching workers. 
00:13:21.596 00:13:21.596 ==================== 00:13:21.596 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:13:21.596 TCP transport: 00:13:21.596 polls: 12905 00:13:21.596 idle_polls: 6155 00:13:21.596 sock_completions: 6750 00:13:21.596 nvme_completions: 9391 00:13:21.596 submitted_requests: 14050 00:13:21.596 queued_requests: 1 00:13:21.596 00:13:21.596 ==================== 00:13:21.596 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:13:21.596 TCP transport: 00:13:21.596 polls: 13444 00:13:21.596 idle_polls: 7765 00:13:21.596 sock_completions: 5679 00:13:21.596 nvme_completions: 8931 00:13:21.596 submitted_requests: 13462 00:13:21.596 queued_requests: 1 00:13:21.596 ======================================================== 00:13:21.596 Latency(us) 00:13:21.596 Device Information : IOPS MiB/s Average min max 00:13:21.596 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2347.20 586.80 55140.86 28025.36 94016.80 00:13:21.596 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2232.21 558.05 57469.78 18562.72 146313.99 00:13:21.596 ======================================================== 00:13:21.596 Total : 4579.41 1144.85 56276.08 18562.72 146313.99 00:13:21.596 00:13:21.596 19:47:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:13:22.530 19:47:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:22.530 19:47:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:13:22.530 19:47:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:13:22.530 19:47:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:13:22.530 19:47:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:22.530 19:47:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:13:22.530 19:47:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:22.530 19:47:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:13:22.530 19:47:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:22.530 19:47:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:22.530 rmmod nvme_tcp 00:13:22.530 rmmod nvme_fabrics 00:13:22.530 rmmod nvme_keyring 00:13:22.530 19:47:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:22.530 19:47:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:13:22.530 19:47:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:13:22.530 19:47:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 72952 ']' 00:13:22.530 19:47:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 72952 00:13:22.530 19:47:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 72952 ']' 00:13:22.530 19:47:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 72952 00:13:22.530 19:47:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:13:22.530 19:47:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:22.530 19:47:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72952 00:13:22.530 killing process with pid 72952 00:13:22.530 19:47:17 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:22.530 19:47:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:22.530 19:47:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72952' 00:13:22.530 19:47:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 72952 00:13:22.530 19:47:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 72952 00:13:29.083 19:47:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:29.083 19:47:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:29.083 19:47:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:29.083 19:47:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:13:29.083 19:47:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:13:29.083 19:47:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:29.083 19:47:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:13:29.083 19:47:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:29.083 19:47:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:29.083 19:47:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:29.083 19:47:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:29.083 19:47:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:29.083 19:47:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:29.083 19:47:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:29.083 19:47:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:29.083 19:47:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:29.083 19:47:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:29.083 19:47:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:29.084 19:47:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:29.084 19:47:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:29.084 19:47:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:29.084 19:47:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:29.084 19:47:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:29.084 19:47:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:29.084 19:47:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:29.084 19:47:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:29.084 19:47:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:13:29.084 00:13:29.084 real 0m19.887s 00:13:29.084 user 1m9.547s 00:13:29.084 sys 0m3.515s 00:13:29.084 19:47:23 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:29.084 19:47:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:13:29.084 ************************************ 00:13:29.084 END TEST nvmf_perf 00:13:29.084 ************************************ 00:13:29.084 19:47:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:13:29.084 19:47:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:29.084 19:47:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:29.084 19:47:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:13:29.084 ************************************ 00:13:29.084 START TEST nvmf_fio_host 00:13:29.084 ************************************ 00:13:29.084 19:47:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:13:29.084 * Looking for test storage... 00:13:29.084 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:13:29.084 19:47:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:29.084 19:47:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:13:29.084 19:47:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:29.084 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:29.084 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:29.084 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:29.084 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:29.084 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:13:29.084 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:13:29.084 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:13:29.084 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:13:29.084 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:13:29.084 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:13:29.084 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:13:29.084 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:29.084 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:13:29.084 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:13:29.084 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:29.084 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:29.084 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:13:29.084 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:13:29.084 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:29.084 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:13:29.084 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:13:29.084 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:13:29.084 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:13:29.084 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:29.084 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:13:29.084 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:13:29.084 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:29.084 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:29.084 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:13:29.084 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:29.084 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:29.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.084 --rc genhtml_branch_coverage=1 00:13:29.084 --rc genhtml_function_coverage=1 00:13:29.084 --rc genhtml_legend=1 00:13:29.084 --rc geninfo_all_blocks=1 00:13:29.084 --rc geninfo_unexecuted_blocks=1 00:13:29.084 00:13:29.084 ' 00:13:29.084 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:29.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.084 --rc genhtml_branch_coverage=1 00:13:29.084 --rc genhtml_function_coverage=1 00:13:29.084 --rc genhtml_legend=1 00:13:29.084 --rc geninfo_all_blocks=1 00:13:29.084 --rc geninfo_unexecuted_blocks=1 00:13:29.084 00:13:29.084 ' 00:13:29.084 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:29.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.084 --rc genhtml_branch_coverage=1 00:13:29.084 --rc genhtml_function_coverage=1 00:13:29.084 --rc genhtml_legend=1 00:13:29.084 --rc geninfo_all_blocks=1 00:13:29.084 --rc geninfo_unexecuted_blocks=1 00:13:29.084 00:13:29.084 ' 00:13:29.084 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:29.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.084 --rc genhtml_branch_coverage=1 00:13:29.084 --rc genhtml_function_coverage=1 00:13:29.084 --rc genhtml_legend=1 00:13:29.084 --rc geninfo_all_blocks=1 00:13:29.084 --rc geninfo_unexecuted_blocks=1 00:13:29.084 00:13:29.084 ' 00:13:29.084 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:29.084 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:13:29.084 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:29.084 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:29.084 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:29.084 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.084 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.084 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.084 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:13:29.084 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.084 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:29.084 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:13:29.084 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:29.084 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:29.084 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:29.084 19:47:24 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:29.084 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:29.084 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:29.084 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:29.084 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:29.084 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:29.084 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:29.084 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:13:29.084 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=91838eb1-5852-43eb-90b2-09876f360ab2 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.085 19:47:24 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:29.085 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
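The _remove_spdk_ns and veth cleanup calls that follow first tear down any leftover nvmf_tgt_ns_spdk namespace (hence the "Cannot find device" and "Cannot open network namespace" messages on a clean runner), after which nvmf_veth_init rebuilds the virtual NVMe/TCP test network. A minimal sketch of what that setup amounts to, using the interface names and addresses recorded in the log (an approximation of the nvmf/common.sh helper, not the helper itself):

# Target runs inside the nvmf_tgt_ns_spdk namespace and talks to the host
# over bridged veth pairs; error handling and teardown are omitted here.
ip netns add nvmf_tgt_ns_spdk

# One veth pair per interface; the *_br ends stay on the host and join the bridge.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# Target-side interfaces move into the namespace and get 10.0.0.3/10.0.0.4;
# the initiator side keeps 10.0.0.1/10.0.0.2 on the host.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring everything up and bridge the host-side peers together.
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# Open TCP/4420 for NVMe-oF traffic and check that the target address answers.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3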
00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:29.085 Cannot find device "nvmf_init_br" 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:29.085 Cannot find device "nvmf_init_br2" 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:29.085 Cannot find device "nvmf_tgt_br" 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:13:29.085 Cannot find device "nvmf_tgt_br2" 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:29.085 Cannot find device "nvmf_init_br" 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:29.085 Cannot find device "nvmf_init_br2" 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:29.085 Cannot find device "nvmf_tgt_br" 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:29.085 Cannot find device "nvmf_tgt_br2" 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:29.085 Cannot find device "nvmf_br" 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:29.085 Cannot find device "nvmf_init_if" 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:29.085 Cannot find device "nvmf_init_if2" 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:29.085 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:29.085 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:29.085 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:13:29.086 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:29.086 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:29.086 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:29.086 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:29.086 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:29.086 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:29.086 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:29.086 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:29.086 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:29.086 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:29.086 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:29.086 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:29.086 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:29.086 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:29.086 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:29.086 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:29.086 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:29.086 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:29.086 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:29.086 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:29.086 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:29.086 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:29.086 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:29.086 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:29.086 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:29.086 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:13:29.086 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:13:29.086 00:13:29.086 --- 10.0.0.3 ping statistics --- 00:13:29.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:29.086 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:13:29.086 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:29.086 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:29.086 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:13:29.086 00:13:29.086 --- 10.0.0.4 ping statistics --- 00:13:29.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:29.086 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:13:29.086 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:29.086 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:29.086 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:13:29.086 00:13:29.086 --- 10.0.0.1 ping statistics --- 00:13:29.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:29.086 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:13:29.086 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:29.344 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:29.344 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:13:29.344 00:13:29.344 --- 10.0.0.2 ping statistics --- 00:13:29.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:29.344 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:13:29.344 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:29.344 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 00:13:29.344 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:29.344 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:29.344 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:29.344 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:29.344 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:29.344 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:29.344 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:29.344 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:13:29.344 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:13:29.344 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:29.344 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:13:29.344 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=73420 00:13:29.344 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:29.344 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:29.344 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 73420 00:13:29.344 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@835 -- # '[' -z 73420 ']' 00:13:29.344 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:29.344 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:29.344 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:29.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:29.344 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:29.344 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:13:29.344 [2024-11-26 19:47:24.390589] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:13:29.344 [2024-11-26 19:47:24.390654] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:29.344 [2024-11-26 19:47:24.527116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:29.345 [2024-11-26 19:47:24.563197] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:29.345 [2024-11-26 19:47:24.563238] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:29.345 [2024-11-26 19:47:24.563245] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:29.345 [2024-11-26 19:47:24.563250] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:29.345 [2024-11-26 19:47:24.563254] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
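The target application is starting up here; once its reactors are running and waitforlisten returns, fio.sh issues the same JSON-RPC bring-up that perf.sh used earlier in this log: create the TCP transport, back a subsystem with a malloc bdev, and expose it on the target's veth address. Condensed from the rpc.py calls recorded below (a sketch of the sequence, not the script itself):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# TCP transport with the option string the test assembles
# (-o from NVMF_TRANSPORT_OPTS, -u 8192 passed by fio.sh).
$RPC nvmf_create_transport -t tcp -o -u 8192

# Backing namespace: a 64 MB malloc bdev with a 512-byte block size.
$RPC bdev_malloc_create 64 512 -b Malloc1

# Subsystem that accepts any host (-a) with a fixed serial number (-s),
# plus its namespace and listeners on the target's veth address.
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420

The fio jobs themselves never open a kernel block device: autotest_common.sh preloads the SPDK NVMe fio plugin and hands the transport parameters to fio as the filename, as in the invocation logged further down:

LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
    /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096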
00:13:29.345 [2024-11-26 19:47:24.563944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:29.345 [2024-11-26 19:47:24.564027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:29.345 [2024-11-26 19:47:24.564108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:29.345 [2024-11-26 19:47:24.564315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.603 [2024-11-26 19:47:24.595492] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:29.603 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:29.603 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:13:29.603 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:29.603 [2024-11-26 19:47:24.825877] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:29.862 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:13:29.862 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:29.862 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:13:29.862 19:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:29.862 Malloc1 00:13:29.862 19:47:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:30.121 19:47:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:30.378 19:47:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:30.378 [2024-11-26 19:47:25.622642] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:30.637 19:47:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:13:30.637 19:47:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:13:30.637 19:47:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:13:30.637 19:47:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:13:30.637 19:47:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:30.637 19:47:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:30.637 19:47:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:30.637 19:47:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:30.637 19:47:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:13:30.637 19:47:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:30.637 19:47:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:30.637 19:47:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:30.637 19:47:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:30.637 19:47:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:13:30.637 19:47:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:13:30.637 19:47:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:13:30.637 19:47:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:30.637 19:47:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:30.637 19:47:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:13:30.637 19:47:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:30.637 19:47:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:13:30.637 19:47:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:13:30.637 19:47:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:30.637 19:47:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:13:30.896 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:13:30.896 fio-3.35 00:13:30.896 Starting 1 thread 00:13:33.424 00:13:33.424 test: (groupid=0, jobs=1): err= 0: pid=73490: Tue Nov 26 19:47:28 2024 00:13:33.424 read: IOPS=11.7k, BW=45.9MiB/s (48.1MB/s)(92.0MiB/2005msec) 00:13:33.424 slat (nsec): min=1901, max=282988, avg=2052.57, stdev=2436.76 00:13:33.424 clat (usec): min=2448, max=9150, avg=5694.78, stdev=834.47 00:13:33.424 lat (usec): min=2479, max=9152, avg=5696.83, stdev=834.43 00:13:33.424 clat percentiles (usec): 00:13:33.424 | 1.00th=[ 4490], 5.00th=[ 4686], 10.00th=[ 4817], 20.00th=[ 4948], 00:13:33.424 | 30.00th=[ 5080], 40.00th=[ 5211], 50.00th=[ 5342], 60.00th=[ 5669], 00:13:33.424 | 70.00th=[ 6325], 80.00th=[ 6652], 90.00th=[ 6915], 95.00th=[ 7111], 00:13:33.424 | 99.00th=[ 7439], 99.50th=[ 7504], 99.90th=[ 8029], 99.95th=[ 8291], 00:13:33.424 | 99.99th=[ 8848] 00:13:33.424 bw ( KiB/s): min=39496, max=52936, per=99.98%, avg=46976.00, stdev=6818.56, samples=4 00:13:33.424 iops : min= 9874, max=13234, avg=11744.00, stdev=1704.64, samples=4 00:13:33.424 write: IOPS=11.7k, BW=45.6MiB/s (47.8MB/s)(91.4MiB/2005msec); 0 zone resets 00:13:33.424 slat (nsec): min=1950, max=217226, avg=2106.16, stdev=1586.54 00:13:33.424 clat (usec): min=2322, max=8893, avg=5186.92, stdev=742.57 00:13:33.424 lat (usec): min=2336, max=8895, avg=5189.02, stdev=742.56 00:13:33.424 
clat percentiles (usec): 00:13:33.424 | 1.00th=[ 4080], 5.00th=[ 4293], 10.00th=[ 4424], 20.00th=[ 4555], 00:13:33.424 | 30.00th=[ 4621], 40.00th=[ 4752], 50.00th=[ 4883], 60.00th=[ 5145], 00:13:33.425 | 70.00th=[ 5800], 80.00th=[ 5997], 90.00th=[ 6259], 95.00th=[ 6390], 00:13:33.425 | 99.00th=[ 6718], 99.50th=[ 6849], 99.90th=[ 7373], 99.95th=[ 8094], 00:13:33.425 | 99.99th=[ 8848] 00:13:33.425 bw ( KiB/s): min=40000, max=52664, per=99.98%, avg=46674.00, stdev=6480.01, samples=4 00:13:33.425 iops : min=10000, max=13166, avg=11668.50, stdev=1620.00, samples=4 00:13:33.425 lat (msec) : 4=0.30%, 10=99.70% 00:13:33.425 cpu : usr=78.49%, sys=16.92%, ctx=6, majf=0, minf=7 00:13:33.425 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:13:33.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:33.425 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:33.425 issued rwts: total=23551,23401,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:33.425 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:33.425 00:13:33.425 Run status group 0 (all jobs): 00:13:33.425 READ: bw=45.9MiB/s (48.1MB/s), 45.9MiB/s-45.9MiB/s (48.1MB/s-48.1MB/s), io=92.0MiB (96.5MB), run=2005-2005msec 00:13:33.425 WRITE: bw=45.6MiB/s (47.8MB/s), 45.6MiB/s-45.6MiB/s (47.8MB/s-47.8MB/s), io=91.4MiB (95.8MB), run=2005-2005msec 00:13:33.425 19:47:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:13:33.425 19:47:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:13:33.425 19:47:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:33.425 19:47:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:33.425 19:47:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:33.425 19:47:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:33.425 19:47:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:13:33.425 19:47:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:33.425 19:47:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:33.425 19:47:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:33.425 19:47:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:33.425 19:47:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:13:33.425 19:47:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:13:33.425 19:47:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:13:33.425 19:47:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:33.425 19:47:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 
-- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:33.425 19:47:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:33.425 19:47:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:13:33.425 19:47:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:13:33.425 19:47:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:13:33.425 19:47:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:33.425 19:47:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:13:33.425 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:13:33.425 fio-3.35 00:13:33.425 Starting 1 thread 00:13:35.970 00:13:35.970 test: (groupid=0, jobs=1): err= 0: pid=73539: Tue Nov 26 19:47:30 2024 00:13:35.970 read: IOPS=10.6k, BW=166MiB/s (174MB/s)(332MiB/2002msec) 00:13:35.970 slat (usec): min=3, max=113, avg= 3.36, stdev= 1.51 00:13:35.970 clat (usec): min=1626, max=14224, avg=6500.95, stdev=2048.33 00:13:35.970 lat (usec): min=1630, max=14227, avg=6504.31, stdev=2048.43 00:13:35.970 clat percentiles (usec): 00:13:35.970 | 1.00th=[ 3195], 5.00th=[ 3720], 10.00th=[ 4113], 20.00th=[ 4621], 00:13:35.970 | 30.00th=[ 5080], 40.00th=[ 5604], 50.00th=[ 6128], 60.00th=[ 6718], 00:13:35.970 | 70.00th=[ 7439], 80.00th=[ 8586], 90.00th=[ 9503], 95.00th=[10028], 00:13:35.970 | 99.00th=[11338], 99.50th=[11994], 99.90th=[13698], 99.95th=[13960], 00:13:35.970 | 99.99th=[14091] 00:13:35.970 bw ( KiB/s): min=80608, max=92448, per=50.31%, avg=85512.00, stdev=5041.63, samples=4 00:13:35.970 iops : min= 5038, max= 5778, avg=5344.50, stdev=315.10, samples=4 00:13:35.970 write: IOPS=6188, BW=96.7MiB/s (101MB/s)(175MiB/1805msec); 0 zone resets 00:13:35.970 slat (usec): min=36, max=190, avg=37.67, stdev= 4.71 00:13:35.970 clat (usec): min=1887, max=14672, avg=9687.88, stdev=1382.99 00:13:35.970 lat (usec): min=1924, max=14709, avg=9725.55, stdev=1382.78 00:13:35.970 clat percentiles (usec): 00:13:35.970 | 1.00th=[ 6325], 5.00th=[ 7635], 10.00th=[ 8094], 20.00th=[ 8586], 00:13:35.970 | 30.00th=[ 8979], 40.00th=[ 9372], 50.00th=[ 9765], 60.00th=[10028], 00:13:35.970 | 70.00th=[10421], 80.00th=[10814], 90.00th=[11469], 95.00th=[11863], 00:13:35.970 | 99.00th=[12649], 99.50th=[13042], 99.90th=[13566], 99.95th=[13829], 00:13:35.970 | 99.99th=[14615] 00:13:35.970 bw ( KiB/s): min=83296, max=95968, per=89.54%, avg=88664.00, stdev=5523.71, samples=4 00:13:35.970 iops : min= 5206, max= 5998, avg=5541.50, stdev=345.23, samples=4 00:13:35.970 lat (msec) : 2=0.03%, 4=5.57%, 10=76.65%, 20=17.74% 00:13:35.970 cpu : usr=86.31%, sys=9.55%, ctx=5, majf=0, minf=3 00:13:35.970 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:13:35.970 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.970 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:35.970 issued rwts: total=21267,11171,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.970 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:35.970 00:13:35.970 Run status group 0 (all jobs): 00:13:35.970 READ: bw=166MiB/s (174MB/s), 
166MiB/s-166MiB/s (174MB/s-174MB/s), io=332MiB (348MB), run=2002-2002msec 00:13:35.970 WRITE: bw=96.7MiB/s (101MB/s), 96.7MiB/s-96.7MiB/s (101MB/s-101MB/s), io=175MiB (183MB), run=1805-1805msec 00:13:35.970 19:47:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:35.970 19:47:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:13:35.970 19:47:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:13:35.970 19:47:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:13:35.970 19:47:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:13:35.970 19:47:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:35.970 19:47:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:13:35.970 19:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:35.970 19:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:13:35.970 19:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:35.970 19:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:35.970 rmmod nvme_tcp 00:13:35.970 rmmod nvme_fabrics 00:13:35.970 rmmod nvme_keyring 00:13:35.970 19:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:35.970 19:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:13:35.970 19:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:13:35.970 19:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 73420 ']' 00:13:35.970 19:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 73420 00:13:35.970 19:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 73420 ']' 00:13:35.970 19:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 73420 00:13:35.970 19:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:13:35.970 19:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:35.970 19:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73420 00:13:35.970 19:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:35.970 19:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:35.970 killing process with pid 73420 00:13:35.970 19:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73420' 00:13:35.970 19:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 73420 00:13:35.970 19:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 73420 00:13:36.229 19:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:36.229 19:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:36.229 19:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:36.229 19:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:13:36.229 19:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@791 -- # iptables-save 00:13:36.229 19:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:36.229 19:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:13:36.229 19:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:36.229 19:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:36.229 19:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:36.229 19:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:36.229 19:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:36.229 19:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:36.229 19:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:36.229 19:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:36.229 19:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:36.229 19:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:36.229 19:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:36.229 19:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:36.229 19:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:36.229 19:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:36.229 19:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:36.229 19:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:36.229 19:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:36.229 19:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:36.229 19:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:36.229 19:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:13:36.229 00:13:36.229 real 0m7.569s 00:13:36.229 user 0m30.919s 00:13:36.229 sys 0m1.777s 00:13:36.229 19:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:36.229 19:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:13:36.229 ************************************ 00:13:36.229 END TEST nvmf_fio_host 00:13:36.229 ************************************ 00:13:36.488 19:47:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:13:36.488 19:47:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:36.488 19:47:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:36.488 19:47:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:13:36.488 ************************************ 00:13:36.488 START TEST nvmf_failover 00:13:36.488 
************************************ 00:13:36.488 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:13:36.488 * Looking for test storage... 00:13:36.488 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:13:36.488 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:36.488 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:13:36.488 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:36.488 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:36.488 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:36.488 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:36.488 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:36.488 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:13:36.488 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:13:36.488 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:13:36.488 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:13:36.488 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:13:36.488 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:13:36.488 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:13:36.488 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:36.488 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:13:36.488 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:13:36.488 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:36.488 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:36.488 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:13:36.488 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:13:36.488 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:36.488 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:13:36.488 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:13:36.488 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:13:36.488 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:13:36.488 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:36.488 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:13:36.488 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:13:36.488 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:36.488 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:36.488 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:36.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:36.489 --rc genhtml_branch_coverage=1 00:13:36.489 --rc genhtml_function_coverage=1 00:13:36.489 --rc genhtml_legend=1 00:13:36.489 --rc geninfo_all_blocks=1 00:13:36.489 --rc geninfo_unexecuted_blocks=1 00:13:36.489 00:13:36.489 ' 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:36.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:36.489 --rc genhtml_branch_coverage=1 00:13:36.489 --rc genhtml_function_coverage=1 00:13:36.489 --rc genhtml_legend=1 00:13:36.489 --rc geninfo_all_blocks=1 00:13:36.489 --rc geninfo_unexecuted_blocks=1 00:13:36.489 00:13:36.489 ' 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:36.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:36.489 --rc genhtml_branch_coverage=1 00:13:36.489 --rc genhtml_function_coverage=1 00:13:36.489 --rc genhtml_legend=1 00:13:36.489 --rc geninfo_all_blocks=1 00:13:36.489 --rc geninfo_unexecuted_blocks=1 00:13:36.489 00:13:36.489 ' 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:36.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:36.489 --rc genhtml_branch_coverage=1 00:13:36.489 --rc genhtml_function_coverage=1 00:13:36.489 --rc genhtml_legend=1 00:13:36.489 --rc geninfo_all_blocks=1 00:13:36.489 --rc geninfo_unexecuted_blocks=1 00:13:36.489 00:13:36.489 ' 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=91838eb1-5852-43eb-90b2-09876f360ab2 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.489 
19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:36.489 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
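For reference, the fio_nvme invocations whose output appears earlier in this log (the last one runs mock_sgl_config.fio against 10.0.0.3:4420) never point fio at a block device: the harness LD_PRELOADs SPDK's external ioengine and encodes the NVMe-oF TCP path in --filename, with the job file selecting ioengine=spdk. A minimal manual equivalent, assuming the same repo checkout and fio install locations as this run (PLUGIN and JOB are only shorthand here):

  PLUGIN=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme            # SPDK fio ioengine plugin
  JOB=/home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio  # job file sets ioengine=spdk
  LD_PRELOAD=$PLUGIN /usr/src/fio/fio "$JOB" \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1'

The ldd/grep loop traced around that call only checks whether the plugin links against libasan or libclang_rt.asan so a sanitizer runtime could be preloaded first; in this run both lookups come back empty, so only the plugin itself ends up in LD_PRELOAD.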
00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:36.489 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:36.490 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:36.490 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:36.490 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:36.490 Cannot find device "nvmf_init_br" 00:13:36.490 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:13:36.490 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:36.490 Cannot find device "nvmf_init_br2" 00:13:36.490 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:13:36.490 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:13:36.490 Cannot find device "nvmf_tgt_br" 00:13:36.490 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:13:36.490 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:36.490 Cannot find device "nvmf_tgt_br2" 00:13:36.490 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:13:36.490 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:36.490 Cannot find device "nvmf_init_br" 00:13:36.490 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:13:36.490 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:36.490 Cannot find device "nvmf_init_br2" 00:13:36.490 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:13:36.490 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:36.490 Cannot find device "nvmf_tgt_br" 00:13:36.490 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:13:36.490 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:36.490 Cannot find device "nvmf_tgt_br2" 00:13:36.490 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:13:36.490 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:36.490 Cannot find device "nvmf_br" 00:13:36.490 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:13:36.747 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:36.747 Cannot find device "nvmf_init_if" 00:13:36.747 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:13:36.747 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:36.747 Cannot find device "nvmf_init_if2" 00:13:36.747 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:13:36.747 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:36.747 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:36.747 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:13:36.748 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:36.748 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:36.748 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:13:36.748 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:36.748 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:36.748 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:36.748 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:36.748 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:36.748 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:36.748 
19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:36.748 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:36.748 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:36.748 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:36.748 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:36.748 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:36.748 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:36.748 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:36.748 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:36.748 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:36.748 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:36.748 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:36.748 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:36.748 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:36.748 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:36.748 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:36.748 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:36.748 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:36.748 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:36.748 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:36.748 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:36.748 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:36.748 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:36.748 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:36.748 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:36.748 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:13:36.748 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:36.748 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:36.748 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.108 ms 00:13:36.748 00:13:36.748 --- 10.0.0.3 ping statistics --- 00:13:36.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:36.748 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:13:36.748 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:36.748 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:36.748 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms 00:13:36.748 00:13:36.748 --- 10.0.0.4 ping statistics --- 00:13:36.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:36.748 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:13:36.748 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:36.748 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:36.748 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:13:36.748 00:13:36.748 --- 10.0.0.1 ping statistics --- 00:13:36.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:36.748 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:13:36.748 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:36.748 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:36.748 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:13:36.748 00:13:36.748 --- 10.0.0.2 ping statistics --- 00:13:36.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:36.748 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:13:36.748 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:36.748 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 00:13:36.748 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:36.748 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:36.748 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:36.748 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:36.748 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:36.748 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:36.748 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:36.748 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:13:36.748 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:36.748 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:36.748 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:13:36.748 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=73802 00:13:36.748 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 73802 00:13:36.748 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:36.748 19:47:31 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 73802 ']' 00:13:36.748 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:36.748 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:36.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:36.748 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:36.748 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:36.748 19:47:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:13:37.006 [2024-11-26 19:47:31.996893] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:13:37.006 [2024-11-26 19:47:31.997082] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:37.006 [2024-11-26 19:47:32.136299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:37.006 [2024-11-26 19:47:32.171826] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:37.006 [2024-11-26 19:47:32.171877] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:37.006 [2024-11-26 19:47:32.171884] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:37.006 [2024-11-26 19:47:32.171888] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:37.006 [2024-11-26 19:47:32.171894] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
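Condensed, the nvmf_veth_init sequence traced above gives the failover test two host-side initiator addresses (10.0.0.1, 10.0.0.2) and two target addresses (10.0.0.3, 10.0.0.4) inside the nvmf_tgt_ns_spdk namespace, all joined by one bridge. Stripped of the xtrace prefixes, and leaving out the link-up, iptables ACCEPT and ping-check steps that follow the same pattern, the topology amounts to roughly:

  ip netns add nvmf_tgt_ns_spdk
  # four veth pairs: the *_if ends carry addresses, the *_br ends join the bridge
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  # target-side ends move into the namespace where nvmf_tgt will run
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  # one bridge ties the host-side peers together
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br  master nvmf_br
  ip link set nvmf_init_br2 master nvmf_br
  ip link set nvmf_tgt_br   master nvmf_br
  ip link set nvmf_tgt_br2  master nvmf_br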
00:13:37.006 [2024-11-26 19:47:32.172555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:37.006 [2024-11-26 19:47:32.172750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:37.006 [2024-11-26 19:47:32.173188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:37.006 [2024-11-26 19:47:32.203873] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:37.938 19:47:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:37.938 19:47:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:13:37.938 19:47:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:37.938 19:47:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:37.938 19:47:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:13:37.938 19:47:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:37.938 19:47:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:37.938 [2024-11-26 19:47:33.089225] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:37.938 19:47:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:13:38.196 Malloc0 00:13:38.196 19:47:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:38.454 19:47:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:38.454 19:47:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:38.712 [2024-11-26 19:47:33.758250] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:38.712 19:47:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:13:38.712 [2024-11-26 19:47:33.918321] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:13:38.712 19:47:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:13:38.969 [2024-11-26 19:47:34.122469] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:13:38.969 19:47:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=73854 00:13:38.969 19:47:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:38.969 19:47:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 
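The target provisioning that follows nvmf_tgt startup is a short rpc.py sequence: create the TCP transport, back the namespace with a 64 MB malloc bdev, and expose the same subsystem on three listeners so the initiator has alternate paths to fail over between. Collapsed into one place, using the same rpc.py path, NQN and addresses as this run (the RPC variable and the loop are only shorthand for the calls traced above):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py      # talks to the nvmf_tgt started above
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0             # 64 MB namespace, 512 B blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do
      $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s "$port"
  done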
00:13:38.969 19:47:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 73854 /var/tmp/bdevperf.sock 00:13:38.969 19:47:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 73854 ']' 00:13:38.969 19:47:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:38.969 19:47:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:38.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:38.969 19:47:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:38.969 19:47:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:38.969 19:47:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:13:39.901 19:47:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:39.901 19:47:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:13:39.901 19:47:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:13:40.158 NVMe0n1 00:13:40.158 19:47:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:13:40.416 00:13:40.416 19:47:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=73877 00:13:40.416 19:47:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:40.416 19:47:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:13:41.786 19:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:41.786 19:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:13:45.063 19:47:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:13:45.063 00:13:45.063 19:47:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:13:45.321 19:47:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:13:48.618 19:47:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:48.618 [2024-11-26 19:47:43.630700] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:48.618 19:47:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:13:49.550 19:47:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:13:49.807 19:47:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 73877 00:13:56.363 { 00:13:56.363 "results": [ 00:13:56.363 { 00:13:56.363 "job": "NVMe0n1", 00:13:56.363 "core_mask": "0x1", 00:13:56.363 "workload": "verify", 00:13:56.363 "status": "finished", 00:13:56.363 "verify_range": { 00:13:56.363 "start": 0, 00:13:56.363 "length": 16384 00:13:56.363 }, 00:13:56.363 "queue_depth": 128, 00:13:56.363 "io_size": 4096, 00:13:56.363 "runtime": 15.008475, 00:13:56.363 "iops": 10273.262273482149, 00:13:56.363 "mibps": 40.129930755789644, 00:13:56.363 "io_failed": 4085, 00:13:56.363 "io_timeout": 0, 00:13:56.363 "avg_latency_us": 12114.037339383327, 00:13:56.363 "min_latency_us": 415.90153846153845, 00:13:56.363 "max_latency_us": 18450.904615384614 00:13:56.363 } 00:13:56.363 ], 00:13:56.363 "core_count": 1 00:13:56.363 } 00:13:56.363 19:47:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 73854 00:13:56.363 19:47:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 73854 ']' 00:13:56.363 19:47:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 73854 00:13:56.363 19:47:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:13:56.363 19:47:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:56.363 19:47:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73854 00:13:56.363 killing process with pid 73854 00:13:56.363 19:47:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:56.363 19:47:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:56.363 19:47:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73854' 00:13:56.363 19:47:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 73854 00:13:56.363 19:47:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 73854 00:13:56.363 19:47:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:13:56.363 [2024-11-26 19:47:34.172515] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:13:56.364 [2024-11-26 19:47:34.172591] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73854 ] 00:13:56.364 [2024-11-26 19:47:34.305992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.364 [2024-11-26 19:47:34.342094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:56.364 [2024-11-26 19:47:34.373095] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:56.364 Running I/O for 15 seconds... 
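The failover exercise itself is the stretch of RPCs just above: bdevperf, driven over its own RPC socket, attaches NVMe0 with -x failover through ports 4420 and 4421, perform_tests kicks off the 15-second verify workload, and the script then removes and re-adds listeners on the target while that I/O is running. Reduced to those calls, with the same sockets, NQN and ports as this run (the three variables are only shorthand):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py                       # target RPC socket
  BPERF="$RPC -s /var/tmp/bdevperf.sock"                                # bdevperf RPC socket
  NQN=nqn.2016-06.io.spdk:cnode1

  $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.3 -s 4420   # drop the first path
  sleep 3
  $BPERF bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 \
      -f ipv4 -n $NQN -x failover                                       # hand bdevperf a third path
  $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.3 -s 4421
  sleep 3
  $RPC nvmf_subsystem_add_listener    $NQN -t tcp -a 10.0.0.3 -s 4420   # bring 4420 back
  sleep 1
  $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.3 -s 4422

The results block above (roughly 10.3k IOPS averaged over the 15-second run, io_failed=4085) is what the test reports once wait returns; the ABORTED - SQ DELETION completions dumped below from try.txt are consistent with in-flight commands being failed back to bdevperf each time a path went away.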
00:13:56.364 7827.00 IOPS, 30.57 MiB/s [2024-11-26T19:47:51.611Z]
00:13:56.364 [2024-11-26 19:47:36.821565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:70744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:13:56.364 [2024-11-26 19:47:36.821618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same print_command/print_completion NOTICE pair repeats for every other queued command on qid:1 (writes lba 70752-70816, reads lba 69800-70728), each completed as ABORTED - SQ DELETION (00/08) ...]
00:13:56.368 [2024-11-26 19:47:36.824156] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d98fe0 is same with the state(6) to be set
00:13:56.368 [2024-11-26 19:47:36.824166] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:13:56.368 [2024-11-26 19:47:36.824173] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:13:56.368 [2024-11-26 19:47:36.824180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:70736 len:8 PRP1 0x0 PRP2 0x0
00:13:56.368 [2024-11-26 19:47:36.824188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:56.368 [2024-11-26 19:47:36.824233] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421
[... the four outstanding ASYNC EVENT REQUEST admin commands (qid:0 cid:0-3) are completed the same way, ABORTED - SQ DELETION (00/08) ...]
00:13:56.368 [2024-11-26 19:47:36.824345] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:13:56.368 [2024-11-26 19:47:36.827640] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:13:56.368 [2024-11-26 19:47:36.827668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d29c60 (9): Bad file descriptor
00:13:56.368 [2024-11-26 19:47:36.857613] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:13:56.368 8742.00 IOPS, 34.15 MiB/s [2024-11-26T19:47:51.615Z] 9257.33 IOPS, 36.16 MiB/s [2024-11-26T19:47:51.615Z] 9843.00 IOPS, 38.45 MiB/s [2024-11-26T19:47:51.615Z]
00:13:56.368 [2024-11-26 19:47:40.405773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:117512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:13:56.368 [2024-11-26 19:47:40.405826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same NOTICE pair repeats for the remaining queued WRITE and READ commands on qid:1, each aborted with SQ DELETION (00/08) ...]
00:13:56.371 [2024-11-26 19:47:40.407060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*:
WRITE sqid:1 cid:21 nsid:1 lba:117864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.371 [2024-11-26 19:47:40.407073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.371 [2024-11-26 19:47:40.407082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:117872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.371 [2024-11-26 19:47:40.407089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.371 [2024-11-26 19:47:40.407097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:117880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.371 [2024-11-26 19:47:40.407104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.371 [2024-11-26 19:47:40.407113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:117888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.371 [2024-11-26 19:47:40.407120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.371 [2024-11-26 19:47:40.407129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:117256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.371 [2024-11-26 19:47:40.407136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.371 [2024-11-26 19:47:40.407144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:117264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.371 [2024-11-26 19:47:40.407151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.371 [2024-11-26 19:47:40.407160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:117272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.371 [2024-11-26 19:47:40.407167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.371 [2024-11-26 19:47:40.407175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:117280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.371 [2024-11-26 19:47:40.407182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.371 [2024-11-26 19:47:40.407191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:117288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.371 [2024-11-26 19:47:40.407198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.371 [2024-11-26 19:47:40.407207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:117296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.371 [2024-11-26 19:47:40.407214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.371 [2024-11-26 19:47:40.407223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:117304 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.371 [2024-11-26 19:47:40.407229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.371 [2024-11-26 19:47:40.407238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:117312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.371 [2024-11-26 19:47:40.407245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.371 [2024-11-26 19:47:40.407254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:117320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.371 [2024-11-26 19:47:40.407260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.371 [2024-11-26 19:47:40.407272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:117328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.371 [2024-11-26 19:47:40.407279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.371 [2024-11-26 19:47:40.407288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:117336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.371 [2024-11-26 19:47:40.407295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.371 [2024-11-26 19:47:40.407303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:117344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.371 [2024-11-26 19:47:40.407310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.371 [2024-11-26 19:47:40.407319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:117352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.371 [2024-11-26 19:47:40.407326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.371 [2024-11-26 19:47:40.407334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:117360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.371 [2024-11-26 19:47:40.407341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.371 [2024-11-26 19:47:40.407350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:117368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.371 [2024-11-26 19:47:40.407361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.371 [2024-11-26 19:47:40.407370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:117376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.371 [2024-11-26 19:47:40.407377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.371 [2024-11-26 19:47:40.407386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:117896 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:13:56.371 [2024-11-26 19:47:40.407393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.371 [2024-11-26 19:47:40.407401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:117904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.371 [2024-11-26 19:47:40.407408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.371 [2024-11-26 19:47:40.407417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:117912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.371 [2024-11-26 19:47:40.407424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.371 [2024-11-26 19:47:40.407432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:117920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.371 [2024-11-26 19:47:40.407439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.371 [2024-11-26 19:47:40.407448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:117928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.371 [2024-11-26 19:47:40.407455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.371 [2024-11-26 19:47:40.407464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:117936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.371 [2024-11-26 19:47:40.407474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.371 [2024-11-26 19:47:40.407483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:117944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.371 [2024-11-26 19:47:40.407489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.371 [2024-11-26 19:47:40.407498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:117952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.371 [2024-11-26 19:47:40.407505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.371 [2024-11-26 19:47:40.407513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:117384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.371 [2024-11-26 19:47:40.407520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.372 [2024-11-26 19:47:40.407528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:117392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.372 [2024-11-26 19:47:40.407535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.372 [2024-11-26 19:47:40.407544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:117400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.372 
[2024-11-26 19:47:40.407551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.372 [2024-11-26 19:47:40.407559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:117408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.372 [2024-11-26 19:47:40.407567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.372 [2024-11-26 19:47:40.407575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:117416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.372 [2024-11-26 19:47:40.407582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.372 [2024-11-26 19:47:40.407591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:117424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.372 [2024-11-26 19:47:40.407598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.372 [2024-11-26 19:47:40.407607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:117432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.372 [2024-11-26 19:47:40.407614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.372 [2024-11-26 19:47:40.407622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:117440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.372 [2024-11-26 19:47:40.407633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.372 [2024-11-26 19:47:40.407642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:117448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.372 [2024-11-26 19:47:40.407649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.372 [2024-11-26 19:47:40.407658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:117456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.372 [2024-11-26 19:47:40.407665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.372 [2024-11-26 19:47:40.407676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:117464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.372 [2024-11-26 19:47:40.407684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.372 [2024-11-26 19:47:40.407692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:117472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.372 [2024-11-26 19:47:40.407699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.372 [2024-11-26 19:47:40.407708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:117480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.372 [2024-11-26 19:47:40.407715] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.372 [2024-11-26 19:47:40.407723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:117488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.372 [2024-11-26 19:47:40.407730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.372 [2024-11-26 19:47:40.407738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:117496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.372 [2024-11-26 19:47:40.407746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.372 [2024-11-26 19:47:40.407754] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9d370 is same with the state(6) to be set 00:13:56.372 [2024-11-26 19:47:40.407763] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:13:56.372 [2024-11-26 19:47:40.407774] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:13:56.372 [2024-11-26 19:47:40.407780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117504 len:8 PRP1 0x0 PRP2 0x0 00:13:56.372 [2024-11-26 19:47:40.407787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.372 [2024-11-26 19:47:40.407795] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:13:56.372 [2024-11-26 19:47:40.407800] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:13:56.372 [2024-11-26 19:47:40.407805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117960 len:8 PRP1 0x0 PRP2 0x0 00:13:56.372 [2024-11-26 19:47:40.407812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.372 [2024-11-26 19:47:40.407819] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:13:56.372 [2024-11-26 19:47:40.407823] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:13:56.372 [2024-11-26 19:47:40.407829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117968 len:8 PRP1 0x0 PRP2 0x0 00:13:56.372 [2024-11-26 19:47:40.407836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.372 [2024-11-26 19:47:40.407843] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:13:56.372 [2024-11-26 19:47:40.407848] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:13:56.372 [2024-11-26 19:47:40.407854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117976 len:8 PRP1 0x0 PRP2 0x0 00:13:56.372 [2024-11-26 19:47:40.407861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.372 [2024-11-26 19:47:40.407868] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:13:56.372 [2024-11-26 19:47:40.407873] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually:
00:13:56.372 [2024-11-26 19:47:40.407882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117984 len:8 PRP1 0x0 PRP2 0x0
00:13:56.372 [2024-11-26 19:47:40.407889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:56.372 [2024-11-26 19:47:40.407897] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:13:56.372 [2024-11-26 19:47:40.407901] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:13:56.372 [2024-11-26 19:47:40.407906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117992 len:8 PRP1 0x0 PRP2 0x0
00:13:56.372 [2024-11-26 19:47:40.407913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:56.372 [2024-11-26 19:47:40.407921] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:13:56.372 [2024-11-26 19:47:40.407925] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:13:56.372 [2024-11-26 19:47:40.407930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118000 len:8 PRP1 0x0 PRP2 0x0
00:13:56.372 [2024-11-26 19:47:40.407937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:56.372 [2024-11-26 19:47:40.407944] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:13:56.372 [2024-11-26 19:47:40.407949] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:13:56.372 [2024-11-26 19:47:40.407954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118008 len:8 PRP1 0x0 PRP2 0x0
00:13:56.372 [2024-11-26 19:47:40.407961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:56.372 [2024-11-26 19:47:40.407968] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:13:56.372 [2024-11-26 19:47:40.407973] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:13:56.372 [2024-11-26 19:47:40.407978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118016 len:8 PRP1 0x0 PRP2 0x0
00:13:56.372 [2024-11-26 19:47:40.407985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:56.372 [2024-11-26 19:47:40.408018] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422
00:13:56.372 [2024-11-26 19:47:40.408052] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:13:56.372 [2024-11-26 19:47:40.408061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:56.372 [2024-11-26 19:47:40.408069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:13:56.372 [2024-11-26 19:47:40.408076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:56.373 [2024-11-26 19:47:40.408083] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:13:56.373 [2024-11-26 19:47:40.408090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:56.373 [2024-11-26 19:47:40.408098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:13:56.373 [2024-11-26 19:47:40.408105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:56.373 [2024-11-26 19:47:40.408112] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:13:56.373 [2024-11-26 19:47:40.410787] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:13:56.373 [2024-11-26 19:47:40.410814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d29c60 (9): Bad file descriptor
00:13:56.373 [2024-11-26 19:47:40.434225] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
00:13:56.373 9768.20 IOPS, 38.16 MiB/s [2024-11-26T19:47:51.620Z] 9830.83 IOPS, 38.40 MiB/s [2024-11-26T19:47:51.620Z] 9866.43 IOPS, 38.54 MiB/s [2024-11-26T19:47:51.620Z] 9897.12 IOPS, 38.66 MiB/s [2024-11-26T19:47:51.620Z] 9911.22 IOPS, 38.72 MiB/s [2024-11-26T19:47:51.620Z]
[2024-11-26 19:47:44.860185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:92360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:13:56.373 [2024-11-26 19:47:44.860230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:56.373 [2024-11-26 19:47:44.860245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:92368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:13:56.373 [2024-11-26 19:47:44.860253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:56.373 [2024-11-26 19:47:44.860262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:92376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:13:56.373 [2024-11-26 19:47:44.860269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:56.373 [2024-11-26 19:47:44.860278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:92384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:13:56.373 [2024-11-26 19:47:44.860285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:56.373 [2024-11-26 19:47:44.860293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:92392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:13:56.373 [2024-11-26 19:47:44.860300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:56.373 [2024-11-26 19:47:44.860309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:92400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:13:56.373 [2024-11-26
19:47:44.860316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.373 [2024-11-26 19:47:44.860325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:92408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.373 [2024-11-26 19:47:44.860331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.373 [2024-11-26 19:47:44.860340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:92416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.373 [2024-11-26 19:47:44.860347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.373 [2024-11-26 19:47:44.860356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:92424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.373 [2024-11-26 19:47:44.860363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.373 [2024-11-26 19:47:44.860371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:92432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.373 [2024-11-26 19:47:44.860378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.373 [2024-11-26 19:47:44.860386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:92440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.373 [2024-11-26 19:47:44.860393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.373 [2024-11-26 19:47:44.860422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:92448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.373 [2024-11-26 19:47:44.860429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.373 [2024-11-26 19:47:44.860438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:92456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.373 [2024-11-26 19:47:44.860445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.373 [2024-11-26 19:47:44.860453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:92464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.373 [2024-11-26 19:47:44.860460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.373 [2024-11-26 19:47:44.860468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:92792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.373 [2024-11-26 19:47:44.860475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.373 [2024-11-26 19:47:44.860485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:92800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.373 [2024-11-26 19:47:44.860492] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.373 [2024-11-26 19:47:44.860500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:92808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.373 [2024-11-26 19:47:44.860507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.373 [2024-11-26 19:47:44.860517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:92816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.373 [2024-11-26 19:47:44.860524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.373 [2024-11-26 19:47:44.860533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:92824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.373 [2024-11-26 19:47:44.860541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.373 [2024-11-26 19:47:44.860550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:92832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.373 [2024-11-26 19:47:44.860557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.373 [2024-11-26 19:47:44.860566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:92840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.373 [2024-11-26 19:47:44.860573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.373 [2024-11-26 19:47:44.860582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:92848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.373 [2024-11-26 19:47:44.860590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.373 [2024-11-26 19:47:44.860598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:92856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.373 [2024-11-26 19:47:44.860605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.373 [2024-11-26 19:47:44.860613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:92864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.373 [2024-11-26 19:47:44.860625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.373 [2024-11-26 19:47:44.860634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:92872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.373 [2024-11-26 19:47:44.860641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.373 [2024-11-26 19:47:44.860650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:92880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.373 [2024-11-26 19:47:44.860657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.373 [2024-11-26 19:47:44.860666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:92888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.373 [2024-11-26 19:47:44.860673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.373 [2024-11-26 19:47:44.860681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:92896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.373 [2024-11-26 19:47:44.860688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.373 [2024-11-26 19:47:44.860697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:92904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.374 [2024-11-26 19:47:44.860704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.374 [2024-11-26 19:47:44.860713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:92912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.374 [2024-11-26 19:47:44.860719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.374 [2024-11-26 19:47:44.860728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:92920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.374 [2024-11-26 19:47:44.860735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.374 [2024-11-26 19:47:44.860743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:92928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.374 [2024-11-26 19:47:44.860750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.374 [2024-11-26 19:47:44.860758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:92936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.374 [2024-11-26 19:47:44.860774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.374 [2024-11-26 19:47:44.860784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:92944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.374 [2024-11-26 19:47:44.860792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.374 [2024-11-26 19:47:44.860800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:92952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.374 [2024-11-26 19:47:44.860807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.374 [2024-11-26 19:47:44.860816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:92960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.374 [2024-11-26 19:47:44.860823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:13:56.374 [2024-11-26 19:47:44.860835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:92968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.374 [2024-11-26 19:47:44.860843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.374 [2024-11-26 19:47:44.860852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:92976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.374 [2024-11-26 19:47:44.860859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.374 [2024-11-26 19:47:44.860868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:92472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.374 [2024-11-26 19:47:44.860875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.374 [2024-11-26 19:47:44.860884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:92480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.374 [2024-11-26 19:47:44.860891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.374 [2024-11-26 19:47:44.860900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:92488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.374 [2024-11-26 19:47:44.860906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.374 [2024-11-26 19:47:44.860915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:92496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.374 [2024-11-26 19:47:44.860922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.374 [2024-11-26 19:47:44.860931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:92504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.374 [2024-11-26 19:47:44.860938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.374 [2024-11-26 19:47:44.860946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:92512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.374 [2024-11-26 19:47:44.860954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.374 [2024-11-26 19:47:44.860963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:92520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.374 [2024-11-26 19:47:44.860970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.374 [2024-11-26 19:47:44.860978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:92528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.374 [2024-11-26 19:47:44.860985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.374 [2024-11-26 
19:47:44.860994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:92984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.374 [2024-11-26 19:47:44.861001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.374 [2024-11-26 19:47:44.861010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:92992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.374 [2024-11-26 19:47:44.861017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.374 [2024-11-26 19:47:44.861025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:93000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.374 [2024-11-26 19:47:44.861032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.374 [2024-11-26 19:47:44.861046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:93008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.374 [2024-11-26 19:47:44.861054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.374 [2024-11-26 19:47:44.861062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:93016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.374 [2024-11-26 19:47:44.861069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.374 [2024-11-26 19:47:44.861078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:93024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.374 [2024-11-26 19:47:44.861085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.374 [2024-11-26 19:47:44.861093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:93032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.374 [2024-11-26 19:47:44.861100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.374 [2024-11-26 19:47:44.861109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:93040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.374 [2024-11-26 19:47:44.861117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.374 [2024-11-26 19:47:44.861125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:93048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.374 [2024-11-26 19:47:44.861132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.374 [2024-11-26 19:47:44.861141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:93056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.374 [2024-11-26 19:47:44.861148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.374 [2024-11-26 19:47:44.861157] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:93064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.374 [2024-11-26 19:47:44.861164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.374 [2024-11-26 19:47:44.861172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:93072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.374 [2024-11-26 19:47:44.861179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.375 [2024-11-26 19:47:44.861188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:93080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.375 [2024-11-26 19:47:44.861195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.375 [2024-11-26 19:47:44.861203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:93088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.375 [2024-11-26 19:47:44.861210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.375 [2024-11-26 19:47:44.861219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:93096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.375 [2024-11-26 19:47:44.861226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.375 [2024-11-26 19:47:44.861234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:93104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.375 [2024-11-26 19:47:44.861244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.375 [2024-11-26 19:47:44.861253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:92536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.375 [2024-11-26 19:47:44.861260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.375 [2024-11-26 19:47:44.861270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:92544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.375 [2024-11-26 19:47:44.861277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.375 [2024-11-26 19:47:44.861286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:92552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.375 [2024-11-26 19:47:44.861293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.375 [2024-11-26 19:47:44.861302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:92560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.375 [2024-11-26 19:47:44.861309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.375 [2024-11-26 19:47:44.861317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:84 nsid:1 lba:92568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.375 [2024-11-26 19:47:44.861325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.375 [2024-11-26 19:47:44.861333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:92576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.375 [2024-11-26 19:47:44.861340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.375 [2024-11-26 19:47:44.861349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:92584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.375 [2024-11-26 19:47:44.861356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.375 [2024-11-26 19:47:44.861364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:92592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.375 [2024-11-26 19:47:44.861372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.375 [2024-11-26 19:47:44.861380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:93112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.375 [2024-11-26 19:47:44.861388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.375 [2024-11-26 19:47:44.861396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:93120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.375 [2024-11-26 19:47:44.861403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.375 [2024-11-26 19:47:44.861412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:93128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.375 [2024-11-26 19:47:44.861419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.375 [2024-11-26 19:47:44.861428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:93136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.375 [2024-11-26 19:47:44.861435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.375 [2024-11-26 19:47:44.861447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:93144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.375 [2024-11-26 19:47:44.861454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.375 [2024-11-26 19:47:44.861462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:93152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.375 [2024-11-26 19:47:44.861469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.375 [2024-11-26 19:47:44.861478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:93160 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:13:56.375 [2024-11-26 19:47:44.861485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.375 [2024-11-26 19:47:44.861494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:93168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.375 [2024-11-26 19:47:44.861501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.375 [2024-11-26 19:47:44.861509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:93176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.375 [2024-11-26 19:47:44.861517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.375 [2024-11-26 19:47:44.861525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:93184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.375 [2024-11-26 19:47:44.861532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.375 [2024-11-26 19:47:44.861541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:93192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.375 [2024-11-26 19:47:44.861548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.375 [2024-11-26 19:47:44.861556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:93200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.375 [2024-11-26 19:47:44.861563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.375 [2024-11-26 19:47:44.861572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:93208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.375 [2024-11-26 19:47:44.861579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.375 [2024-11-26 19:47:44.861587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:93216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.375 [2024-11-26 19:47:44.861594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.375 [2024-11-26 19:47:44.861603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:93224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.375 [2024-11-26 19:47:44.861610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.375 [2024-11-26 19:47:44.861618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:93232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.375 [2024-11-26 19:47:44.861625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.375 [2024-11-26 19:47:44.861634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:93240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.376 [2024-11-26 
19:47:44.861644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.376 [2024-11-26 19:47:44.861652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:93248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.376 [2024-11-26 19:47:44.861659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.376 [2024-11-26 19:47:44.861668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:93256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.376 [2024-11-26 19:47:44.861675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.376 [2024-11-26 19:47:44.861683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:93264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.376 [2024-11-26 19:47:44.861691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.376 [2024-11-26 19:47:44.861699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:92600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.376 [2024-11-26 19:47:44.861706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.376 [2024-11-26 19:47:44.861715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:92608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.376 [2024-11-26 19:47:44.861721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.376 [2024-11-26 19:47:44.861730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:92616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.376 [2024-11-26 19:47:44.861737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.376 [2024-11-26 19:47:44.861745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:92624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.376 [2024-11-26 19:47:44.861752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.376 [2024-11-26 19:47:44.861761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:92632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.376 [2024-11-26 19:47:44.861782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.376 [2024-11-26 19:47:44.861791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:92640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.376 [2024-11-26 19:47:44.861799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.376 [2024-11-26 19:47:44.861808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:92648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.376 [2024-11-26 19:47:44.861815] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.376 [2024-11-26 19:47:44.861824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:92656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.376 [2024-11-26 19:47:44.861831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.376 [2024-11-26 19:47:44.861840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:92664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.376 [2024-11-26 19:47:44.861847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.376 [2024-11-26 19:47:44.861856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:92672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.376 [2024-11-26 19:47:44.861867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.376 [2024-11-26 19:47:44.861876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:92680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.376 [2024-11-26 19:47:44.861883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.376 [2024-11-26 19:47:44.861892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:92688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.376 [2024-11-26 19:47:44.861898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.376 [2024-11-26 19:47:44.861907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:92696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.376 [2024-11-26 19:47:44.861914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.376 [2024-11-26 19:47:44.861923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:92704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.376 [2024-11-26 19:47:44.861930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.376 [2024-11-26 19:47:44.861938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:92712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.376 [2024-11-26 19:47:44.861946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.376 [2024-11-26 19:47:44.861954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:92720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.376 [2024-11-26 19:47:44.861961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.376 [2024-11-26 19:47:44.861970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:93272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.376 [2024-11-26 19:47:44.861976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.376 [2024-11-26 19:47:44.861985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:93280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.376 [2024-11-26 19:47:44.861992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.376 [2024-11-26 19:47:44.862001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:93288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.376 [2024-11-26 19:47:44.862008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.376 [2024-11-26 19:47:44.862016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:93296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.376 [2024-11-26 19:47:44.862023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.376 [2024-11-26 19:47:44.862032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:93304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.376 [2024-11-26 19:47:44.862039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.376 [2024-11-26 19:47:44.862048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:93312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.376 [2024-11-26 19:47:44.862055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.376 [2024-11-26 19:47:44.862067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:93320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.376 [2024-11-26 19:47:44.862075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.376 [2024-11-26 19:47:44.862083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:93328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.376 [2024-11-26 19:47:44.862091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.376 [2024-11-26 19:47:44.862099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:93336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.376 [2024-11-26 19:47:44.862106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.376 [2024-11-26 19:47:44.862115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:93344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.376 [2024-11-26 19:47:44.862123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.376 [2024-11-26 19:47:44.862131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:93352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.376 [2024-11-26 19:47:44.862139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:13:56.377 [2024-11-26 19:47:44.862147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:93360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.377 [2024-11-26 19:47:44.862154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.377 [2024-11-26 19:47:44.862163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:93368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.377 [2024-11-26 19:47:44.862170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.377 [2024-11-26 19:47:44.862179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:93376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:13:56.377 [2024-11-26 19:47:44.862185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.377 [2024-11-26 19:47:44.862194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:92728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.377 [2024-11-26 19:47:44.862201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.377 [2024-11-26 19:47:44.862210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:92736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.377 [2024-11-26 19:47:44.862217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.377 [2024-11-26 19:47:44.862226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:92744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.377 [2024-11-26 19:47:44.862233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.377 [2024-11-26 19:47:44.862242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.377 [2024-11-26 19:47:44.862249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.377 [2024-11-26 19:47:44.862257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:92760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.377 [2024-11-26 19:47:44.862270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.377 [2024-11-26 19:47:44.862278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:92768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.377 [2024-11-26 19:47:44.862285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.377 [2024-11-26 19:47:44.862294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:92776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:56.377 [2024-11-26 19:47:44.862307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.377 
[2024-11-26 19:47:44.862338] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:13:56.377 [2024-11-26 19:47:44.862345] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:13:56.377 [2024-11-26 19:47:44.862350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92784 len:8 PRP1 0x0 PRP2 0x0 00:13:56.377 [2024-11-26 19:47:44.862357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.377 [2024-11-26 19:47:44.862392] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:13:56.377 [2024-11-26 19:47:44.862424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:13:56.377 [2024-11-26 19:47:44.862433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.377 [2024-11-26 19:47:44.862442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:13:56.377 [2024-11-26 19:47:44.862449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.377 [2024-11-26 19:47:44.862457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:13:56.377 [2024-11-26 19:47:44.862464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.377 [2024-11-26 19:47:44.862471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:13:56.377 [2024-11-26 19:47:44.862479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:56.377 [2024-11-26 19:47:44.862486] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:13:56.377 [2024-11-26 19:47:44.865154] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:13:56.377 [2024-11-26 19:47:44.865180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d29c60 (9): Bad file descriptor 00:13:56.377 [2024-11-26 19:47:44.887295] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
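The flood of "ABORTED - SQ DELETION" completions above is the expected side effect of dropping the active path: the initiator tears down its I/O submission queues, every queued READ/WRITE is completed as aborted, and bdev_nvme fails the trid over to the next registered path before resetting the controller (the "Start failover ... Resetting controller successful" notices). A minimal Bash sketch of the round trip that produces and later verifies these notices, reusing the rpc.py calls traced further down in this log; the try.txt path stands in for the file the harness captures bdevperf output into:

    # Sketch only: command names come from the rpc.py traces in this log,
    # the output file path is illustrative.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    out=try.txt   # bdevperf output captured by the harness

    # Register a secondary path on the same bdev, then drop the active one
    # to force a failover while I/O is in flight.
    $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    $rpc -s $sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    sleep 3

    # Each successful failover logs exactly one "Resetting controller successful"
    # notice; the test expects three of them across its three path switches.
    count=$(grep -c 'Resetting controller successful' "$out")
    (( count == 3 )) || echo "expected 3 successful resets, saw $count"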
00:13:56.377 10170.60 IOPS, 39.73 MiB/s [2024-11-26T19:47:51.624Z] 10362.36 IOPS, 40.48 MiB/s [2024-11-26T19:47:51.624Z] 10321.50 IOPS, 40.32 MiB/s [2024-11-26T19:47:51.624Z] 10314.62 IOPS, 40.29 MiB/s [2024-11-26T19:47:51.624Z] 10288.71 IOPS, 40.19 MiB/s [2024-11-26T19:47:51.624Z] 10273.20 IOPS, 40.13 MiB/s 00:13:56.377 Latency(us) 00:13:56.377 [2024-11-26T19:47:51.624Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:56.377 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:56.377 Verification LBA range: start 0x0 length 0x4000 00:13:56.377 NVMe0n1 : 15.01 10273.26 40.13 272.18 0.00 12114.04 415.90 18450.90 00:13:56.377 [2024-11-26T19:47:51.624Z] =================================================================================================================== 00:13:56.377 [2024-11-26T19:47:51.624Z] Total : 10273.26 40.13 272.18 0.00 12114.04 415.90 18450.90 00:13:56.377 Received shutdown signal, test time was about 15.000000 seconds 00:13:56.377 00:13:56.377 Latency(us) 00:13:56.377 [2024-11-26T19:47:51.624Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:56.377 [2024-11-26T19:47:51.624Z] =================================================================================================================== 00:13:56.377 [2024-11-26T19:47:51.624Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:56.377 19:47:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:13:56.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:56.377 19:47:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:13:56.377 19:47:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:13:56.377 19:47:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=74055 00:13:56.377 19:47:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:13:56.377 19:47:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 74055 /var/tmp/bdevperf.sock 00:13:56.377 19:47:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 74055 ']' 00:13:56.377 19:47:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:56.377 19:47:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:56.377 19:47:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:13:56.377 19:47:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:56.377 19:47:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:13:56.635 19:47:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:56.635 19:47:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:13:56.635 19:47:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:13:56.892 [2024-11-26 19:47:51.932325] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:13:56.892 19:47:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:13:57.149 [2024-11-26 19:47:52.139178] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:13:57.149 19:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:13:57.406 NVMe0n1 00:13:57.406 19:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:13:57.662 00:13:57.662 19:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:13:57.918 00:13:57.918 19:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:13:57.919 19:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:13:58.175 19:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:13:58.175 19:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:14:01.477 19:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:01.477 19:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:14:01.477 19:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=74133 00:14:01.477 19:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 74133 00:14:01.477 19:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:02.851 { 00:14:02.851 "results": [ 00:14:02.851 { 00:14:02.851 "job": "NVMe0n1", 00:14:02.851 "core_mask": "0x1", 00:14:02.851 "workload": "verify", 00:14:02.851 "status": "finished", 00:14:02.851 "verify_range": { 00:14:02.851 "start": 0, 00:14:02.851 "length": 16384 00:14:02.851 }, 00:14:02.851 "queue_depth": 128, 
00:14:02.851 "io_size": 4096, 00:14:02.851 "runtime": 1.00674, 00:14:02.851 "iops": 9778.095635417287, 00:14:02.851 "mibps": 38.195686075848776, 00:14:02.851 "io_failed": 0, 00:14:02.851 "io_timeout": 0, 00:14:02.851 "avg_latency_us": 13027.247150314131, 00:14:02.851 "min_latency_us": 1064.96, 00:14:02.851 "max_latency_us": 14922.043076923077 00:14:02.851 } 00:14:02.851 ], 00:14:02.851 "core_count": 1 00:14:02.851 } 00:14:02.851 19:47:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:14:02.851 [2024-11-26 19:47:50.921399] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:14:02.851 [2024-11-26 19:47:50.921502] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74055 ] 00:14:02.851 [2024-11-26 19:47:51.052852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:02.851 [2024-11-26 19:47:51.085097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.851 [2024-11-26 19:47:51.114044] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:02.851 [2024-11-26 19:47:53.354209] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:14:02.851 [2024-11-26 19:47:53.354305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:02.851 [2024-11-26 19:47:53.354318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.851 [2024-11-26 19:47:53.354328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:02.851 [2024-11-26 19:47:53.354335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.851 [2024-11-26 19:47:53.354342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:02.851 [2024-11-26 19:47:53.354349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.851 [2024-11-26 19:47:53.354357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:02.851 [2024-11-26 19:47:53.354364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.851 [2024-11-26 19:47:53.354371] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:14:02.851 [2024-11-26 19:47:53.354398] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:14:02.851 [2024-11-26 19:47:53.354414] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x167cc60 (9): Bad file descriptor 00:14:02.851 [2024-11-26 19:47:53.359972] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:14:02.851 Running I/O for 1 seconds... 
00:14:02.851 9716.00 IOPS, 37.95 MiB/s 00:14:02.851 Latency(us) 00:14:02.851 [2024-11-26T19:47:58.098Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:02.851 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:02.851 Verification LBA range: start 0x0 length 0x4000 00:14:02.851 NVMe0n1 : 1.01 9778.10 38.20 0.00 0.00 13027.25 1064.96 14922.04 00:14:02.851 [2024-11-26T19:47:58.098Z] =================================================================================================================== 00:14:02.851 [2024-11-26T19:47:58.098Z] Total : 9778.10 38.20 0.00 0.00 13027.25 1064.96 14922.04 00:14:02.851 19:47:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:02.851 19:47:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:14:02.851 19:47:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:03.109 19:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:03.109 19:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:14:03.368 19:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:03.368 19:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:14:06.646 19:48:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:06.646 19:48:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:14:06.646 19:48:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 74055 00:14:06.646 19:48:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 74055 ']' 00:14:06.646 19:48:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 74055 00:14:06.646 19:48:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:14:06.646 19:48:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:06.646 19:48:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74055 00:14:06.646 killing process with pid 74055 00:14:06.646 19:48:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:06.646 19:48:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:06.646 19:48:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74055' 00:14:06.646 19:48:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 74055 00:14:06.646 19:48:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 74055 00:14:06.942 19:48:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:14:06.942 19:48:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:06.942 19:48:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:14:06.942 19:48:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:14:06.942 19:48:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:14:06.942 19:48:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:06.942 19:48:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:14:07.226 19:48:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:07.226 19:48:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:14:07.226 19:48:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:07.226 19:48:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:07.226 rmmod nvme_tcp 00:14:07.226 rmmod nvme_fabrics 00:14:07.226 rmmod nvme_keyring 00:14:07.226 19:48:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:07.226 19:48:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:14:07.226 19:48:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:14:07.226 19:48:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 73802 ']' 00:14:07.226 19:48:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 73802 00:14:07.226 19:48:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 73802 ']' 00:14:07.226 19:48:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 73802 00:14:07.226 19:48:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:14:07.226 19:48:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:07.226 19:48:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73802 00:14:07.226 killing process with pid 73802 00:14:07.226 19:48:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:07.226 19:48:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:07.226 19:48:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73802' 00:14:07.226 19:48:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 73802 00:14:07.226 19:48:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 73802 00:14:07.226 19:48:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:07.226 19:48:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:07.226 19:48:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:07.226 19:48:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:14:07.226 19:48:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:14:07.226 19:48:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:07.226 19:48:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:14:07.226 19:48:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:07.226 19:48:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:07.226 19:48:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:07.486 19:48:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:07.486 19:48:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:07.486 19:48:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:07.486 19:48:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:07.486 19:48:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:07.486 19:48:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:07.486 19:48:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:07.486 19:48:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:07.486 19:48:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:07.486 19:48:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:07.486 19:48:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:07.486 19:48:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:07.486 19:48:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:07.486 19:48:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:07.486 19:48:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:07.486 19:48:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:07.486 19:48:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:14:07.486 00:14:07.486 real 0m31.183s 00:14:07.486 user 2m0.749s 00:14:07.486 sys 0m4.276s 00:14:07.486 ************************************ 00:14:07.486 END TEST nvmf_failover 00:14:07.486 ************************************ 00:14:07.486 19:48:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:07.486 19:48:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:07.486 19:48:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:14:07.486 19:48:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:07.486 19:48:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:07.486 19:48:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:14:07.486 ************************************ 00:14:07.486 START TEST nvmf_host_discovery 00:14:07.486 ************************************ 00:14:07.486 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:14:07.747 * Looking for test storage... 
00:14:07.747 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:07.747 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:07.747 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:07.747 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:14:07.747 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:07.747 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:07.747 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:07.747 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:07.747 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:14:07.747 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:14:07.747 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:14:07.747 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:14:07.747 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:14:07.747 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:14:07.747 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:14:07.747 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:07.747 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:14:07.747 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:14:07.747 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:07.747 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:07.747 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:14:07.747 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:14:07.747 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:07.747 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:14:07.747 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:14:07.747 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:14:07.747 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:14:07.747 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:07.747 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:14:07.747 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:14:07.747 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:07.747 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:07.747 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:14:07.747 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:07.747 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:07.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.747 --rc genhtml_branch_coverage=1 00:14:07.747 --rc genhtml_function_coverage=1 00:14:07.747 --rc genhtml_legend=1 00:14:07.747 --rc geninfo_all_blocks=1 00:14:07.747 --rc geninfo_unexecuted_blocks=1 00:14:07.747 00:14:07.747 ' 00:14:07.747 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:07.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.747 --rc genhtml_branch_coverage=1 00:14:07.747 --rc genhtml_function_coverage=1 00:14:07.747 --rc genhtml_legend=1 00:14:07.747 --rc geninfo_all_blocks=1 00:14:07.747 --rc geninfo_unexecuted_blocks=1 00:14:07.747 00:14:07.747 ' 00:14:07.747 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:07.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.747 --rc genhtml_branch_coverage=1 00:14:07.747 --rc genhtml_function_coverage=1 00:14:07.747 --rc genhtml_legend=1 00:14:07.747 --rc geninfo_all_blocks=1 00:14:07.747 --rc geninfo_unexecuted_blocks=1 00:14:07.747 00:14:07.747 ' 00:14:07.747 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:07.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.747 --rc genhtml_branch_coverage=1 00:14:07.747 --rc genhtml_function_coverage=1 00:14:07.747 --rc genhtml_legend=1 00:14:07.747 --rc geninfo_all_blocks=1 00:14:07.747 --rc geninfo_unexecuted_blocks=1 00:14:07.747 00:14:07.747 ' 00:14:07.747 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:07.747 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:14:07.747 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:07.747 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:07.747 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:07.747 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:07.747 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:07.747 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:07.747 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:07.747 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:07.747 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:07.747 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:07.747 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:14:07.747 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=91838eb1-5852-43eb-90b2-09876f360ab2 00:14:07.747 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:07.747 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:07.747 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:07.747 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:07.747 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:07.747 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:14:07.747 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:07.747 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:07.747 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:07.747 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.747 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.747 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.747 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:14:07.747 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.747 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:14:07.747 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:07.747 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:07.747 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:07.748 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:07.748 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:07.748 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:07.748 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:07.748 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:07.748 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:07.748 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:07.748 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:14:07.748 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:14:07.748 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:14:07.748 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:14:07.748 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:14:07.748 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:14:07.748 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:14:07.748 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:07.748 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:07.748 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:07.748 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:07.748 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:07.748 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:07.748 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:07.748 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:07.748 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:07.748 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:07.748 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:07.748 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:07.748 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:07.748 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:07.748 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:07.748 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:07.748 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:07.748 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:07.748 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:07.748 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:07.748 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:07.748 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:07.748 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:07.748 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:07.748 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:07.748 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:14:07.748 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:07.748 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:07.748 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:07.748 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:07.748 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:07.748 Cannot find device "nvmf_init_br" 00:14:07.748 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:14:07.748 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:07.748 Cannot find device "nvmf_init_br2" 00:14:07.748 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:14:07.748 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:07.748 Cannot find device "nvmf_tgt_br" 00:14:07.748 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:14:07.748 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:07.748 Cannot find device "nvmf_tgt_br2" 00:14:07.748 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:14:07.748 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:07.748 Cannot find device "nvmf_init_br" 00:14:07.748 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:14:07.748 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:07.748 Cannot find device "nvmf_init_br2" 00:14:07.748 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:14:07.748 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:07.748 Cannot find device "nvmf_tgt_br" 00:14:07.748 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:14:07.748 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:07.748 Cannot find device "nvmf_tgt_br2" 00:14:07.748 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:14:07.748 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:07.748 Cannot find device "nvmf_br" 00:14:07.748 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:14:07.748 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:08.005 Cannot find device "nvmf_init_if" 00:14:08.005 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:14:08.005 19:48:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:08.005 Cannot find device "nvmf_init_if2" 00:14:08.005 19:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:14:08.005 19:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:08.005 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:14:08.005 19:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:14:08.005 19:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:08.005 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:08.005 19:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:14:08.005 19:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:08.005 19:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:08.005 19:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:08.005 19:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:08.005 19:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:08.005 19:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:08.005 19:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:08.005 19:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:08.005 19:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:08.005 19:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:08.005 19:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:08.005 19:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:08.005 19:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:08.005 19:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:08.005 19:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:08.005 19:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:08.005 19:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:08.005 19:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:08.005 19:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:08.005 19:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:08.005 19:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:08.005 19:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:08.005 19:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:08.005 19:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:08.005 19:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:08.005 19:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:08.005 19:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:08.005 19:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:08.005 19:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:08.005 19:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:08.005 19:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:08.005 19:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:08.005 19:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:08.005 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:08.005 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:14:08.005 00:14:08.005 --- 10.0.0.3 ping statistics --- 00:14:08.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:08.005 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:14:08.005 19:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:08.005 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:08.005 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:14:08.005 00:14:08.005 --- 10.0.0.4 ping statistics --- 00:14:08.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:08.005 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:14:08.005 19:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:08.005 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:08.005 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:14:08.005 00:14:08.005 --- 10.0.0.1 ping statistics --- 00:14:08.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:08.005 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:14:08.005 19:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:08.005 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:08.005 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:14:08.005 00:14:08.005 --- 10.0.0.2 ping statistics --- 00:14:08.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:08.005 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:14:08.005 19:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:08.005 19:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 00:14:08.005 19:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:08.005 19:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:08.005 19:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:08.005 19:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:08.005 19:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:08.005 19:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:08.005 19:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:08.005 19:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:14:08.005 19:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:08.005 19:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:08.005 19:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:08.005 19:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=74455 00:14:08.005 19:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:08.005 19:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 74455 00:14:08.005 19:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 74455 ']' 00:14:08.005 19:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:08.005 19:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:08.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:08.005 19:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:08.005 19:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:08.005 19:48:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:08.005 [2024-11-26 19:48:03.222275] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:14:08.005 [2024-11-26 19:48:03.222337] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:08.302 [2024-11-26 19:48:03.367259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.302 [2024-11-26 19:48:03.402297] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:08.302 [2024-11-26 19:48:03.402337] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:08.302 [2024-11-26 19:48:03.402343] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:08.302 [2024-11-26 19:48:03.402348] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:08.302 [2024-11-26 19:48:03.402352] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:08.302 [2024-11-26 19:48:03.402613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:08.302 [2024-11-26 19:48:03.434063] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:08.868 19:48:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:08.868 19:48:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:14:08.868 19:48:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:08.868 19:48:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:08.868 19:48:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:08.868 19:48:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:08.868 19:48:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:08.868 19:48:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.868 19:48:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:08.868 [2024-11-26 19:48:04.100323] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:08.868 19:48:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.868 19:48:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:14:08.868 19:48:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.868 19:48:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:08.868 [2024-11-26 19:48:04.108398] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:14:08.868 19:48:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.868 19:48:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:14:08.868 19:48:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.868 19:48:04 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:09.126 null0 00:14:09.126 19:48:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.126 19:48:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:14:09.126 19:48:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.126 19:48:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:09.126 null1 00:14:09.126 19:48:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.126 19:48:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:14:09.126 19:48:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.126 19:48:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:09.126 19:48:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.126 19:48:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=74482 00:14:09.126 19:48:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 74482 /tmp/host.sock 00:14:09.126 19:48:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 74482 ']' 00:14:09.126 19:48:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:14:09.126 19:48:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:14:09.126 19:48:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:09.126 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:14:09.126 19:48:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:14:09.126 19:48:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:09.126 19:48:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:09.126 [2024-11-26 19:48:04.171080] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:14:09.126 [2024-11-26 19:48:04.171137] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74482 ] 00:14:09.126 [2024-11-26 19:48:04.305542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:09.126 [2024-11-26 19:48:04.337953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.126 [2024-11-26 19:48:04.367718] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:10.063 19:48:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:10.064 19:48:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:14:10.064 19:48:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:10.064 19:48:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:14:10.064 19:48:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.064 19:48:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # jq -r '.[].name' 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- host/discovery.sh@91 -- # get_subsystem_names 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:10.064 [2024-11-26 19:48:05.240649] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:14:10.064 19:48:05 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.064 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:10.322 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:14:10.322 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.322 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:14:10.322 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:14:10.322 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:14:10.322 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:10.322 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:14:10.323 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.323 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:10.323 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.323 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:14:10.323 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:14:10.323 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:10.323 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:10.323 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:14:10.323 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:14:10.323 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:10.323 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:14:10.323 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.323 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:14:10.323 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:10.323 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:10.323 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.323 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:14:10.323 19:48:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:14:10.889 [2024-11-26 19:48:06.013005] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:14:10.889 [2024-11-26 19:48:06.013033] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:14:10.889 [2024-11-26 19:48:06.013048] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:14:10.889 [2024-11-26 19:48:06.019039] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:14:10.889 [2024-11-26 19:48:06.073323] 
bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:14:10.889 [2024-11-26 19:48:06.074028] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x225de60:1 started. 00:14:10.889 [2024-11-26 19:48:06.075353] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:14:10.889 [2024-11-26 19:48:06.075372] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:14:10.889 [2024-11-26 19:48:06.081733] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x225de60 was disconnected and freed. delete nvme_qpair. 00:14:11.147 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:11.147 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:14:11.147 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:14:11.147 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:14:11.147 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:11.147 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:11.147 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:14:11.147 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.147 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:11.407 19:48:06 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 
00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:11.407 [2024-11-26 19:48:06.534523] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x226c2f0:1 started. 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:11.407 [2024-11-26 19:48:06.541893] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x226c2f0 was disconnected and freed. delete nvme_qpair. 
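Taken together, the RPC activity traced above follows a short sequence. The sketch below condenses it from the traced commands themselves; rpc_cmd is the test suite's RPC wrapper (talking to the in-namespace target on its default socket, and to the host-side nvmf_tgt when given -s /tmp/host.sock), and the argument strings are copied from this trace rather than from the host/discovery.sh source, so treat it as an illustration of the flow, not the canonical script.

# Target side (nvmf_tgt running inside the nvmf_tgt_ns_spdk namespace, default RPC socket):
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009
rpc_cmd bdev_null_create null0 1000 512
rpc_cmd bdev_null_create null1 1000 512
rpc_cmd bdev_wait_for_examine

# Host side (second nvmf_tgt listening on /tmp/host.sock) starts watching the discovery service:
rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test

# Target then publishes a subsystem step by step; each step shows up on the host as a
# discovery AER / log page update in the trace above:
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1   # second namespace -> nvme0n2 on the host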
00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:11.407 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:14:11.408 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:14:11.408 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:14:11.408 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.408 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:11.408 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:14:11.408 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.408 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:14:11.408 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:14:11.408 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:14:11.408 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:11.408 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:14:11.408 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.408 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:11.408 [2024-11-26 19:48:06.597915] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:14:11.408 [2024-11-26 19:48:06.598677] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:14:11.408 [2024-11-26 19:48:06.598700] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:14:11.408 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.408 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:14:11.408 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:14:11.408 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:11.408 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:11.408 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:14:11.408 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:14:11.408 [2024-11-26 19:48:06.604675] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:14:11.408 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:11.408 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:11.408 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.408 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:11.408 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:14:11.408 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:14:11.408 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.408 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:11.408 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:11.408 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:14:11.408 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:14:11.408 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:11.408 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:11.408 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:14:11.408 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:14:11.408 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:11.408 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:11.408 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:11.408 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:11.408 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.408 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:11.667 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.667 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:14:11.667 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:11.667 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:14:11.668 [2024-11-26 19:48:06.666962] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:14:11.668 [2024-11-26 19:48:06.667006] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:14:11.668 [2024-11-26 19:48:06.667013] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:14:11.668 [2024-11-26 19:48:06.667016] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 
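The waitforcondition loops in this trace repeatedly evaluate small helpers whose bodies can be read straight off the traced jq pipelines. A rough reconstruction from this trace (not the canonical host/discovery.sh definitions, and with the notify_id bookkeeping inferred from the notification_count/notify_id values printed above) would be:

get_subsystem_names() {
    # controller names seen by the host-side bdev_nvme discovery ("nvme0" once attached)
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}
get_bdev_list() {
    # namespace bdevs created on the host ("nvme0n1", then "nvme0n1 nvme0n2")
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}
get_subsystem_paths() {
    # listener ports a controller is connected through, e.g. "4420 4421" while both
    # listeners are attached, "4421" after the 4420 listener is removed
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}
get_notification_count() {
    # count notifications newer than the last seen id and advance the cursor
    notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
    notify_id=$((notify_id + notification_count))
}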
00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:11.668 [2024-11-26 19:48:06.738776] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:14:11.668 [2024-11-26 19:48:06.738801] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:14:11.668 [2024-11-26 19:48:06.743399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:11.668 [2024-11-26 19:48:06.743427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:11.668 [2024-11-26 19:48:06.743434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:11.668 [2024-11-26 19:48:06.743440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:11.668 [2024-11-26 19:48:06.743445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:11.668 [2024-11-26 19:48:06.743449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:11.668 [2024-11-26 19:48:06.743455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:11.668 [2024-11-26 19:48:06.743459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:11.668 [2024-11-26 19:48:06.743465] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x223a240 is same with the state(6) to be set 00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:14:11.668 [2024-11-26 19:48:06.744787] bdev_nvme.c:7271:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:14:11.668 [2024-11-26 19:48:06.744807] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:14:11.668 [2024-11-26 19:48:06.744844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x223a240 (9): Bad file descriptor 00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:11.668 19:48:06 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:14:11.668 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:14:11.669 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.669 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:14:11.669 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:11.669 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.669 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:14:11.669 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:11.669 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:14:11.669 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:14:11.669 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:14:11.669 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:14:11.669 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:11.669 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:11.669 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:14:11.669 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:14:11.669 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:14:11.669 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.669 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:14:11.669 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:11.669 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.669 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:14:11.669 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:14:11.669 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:14:11.669 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:11.669 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:14:11.669 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.669 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:11.669 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.669 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:14:11.669 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:14:11.669 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:11.669 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:11.669 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:14:11.669 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:14:11.669 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:11.669 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:11.669 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.669 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:14:11.669 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:11.669 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:14:11.928 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.928 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:14:11.928 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:11.928 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:14:11.928 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:14:11.928 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:11.928 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:11.928 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- 
# eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:14:11.928 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:14:11.928 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:11.928 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:11.928 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.928 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:11.928 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:11.928 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:11.928 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.928 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:14:11.928 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:11.928 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:14:11.928 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:14:11.928 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:14:11.928 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:14:11.928 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:14:11.928 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:14:11.928 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:14:11.928 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:14:11.928 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:14:11.928 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:14:11.928 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.928 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:11.928 19:48:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.928 19:48:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:14:11.928 19:48:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:14:11.928 19:48:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:14:11.928 19:48:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:14:11.928 19:48:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:14:11.928 19:48:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.928 19:48:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:12.903 [2024-11-26 19:48:08.023421] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:14:12.903 [2024-11-26 19:48:08.023450] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:14:12.903 [2024-11-26 19:48:08.023460] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:14:12.903 [2024-11-26 19:48:08.029446] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:14:12.903 [2024-11-26 19:48:08.087717] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:14:12.903 [2024-11-26 19:48:08.088313] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x2245c30:1 started. 
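[Editor's note] The recurring common/autotest_common.sh@918-@922 trace lines above come from the test suite's waitforcondition polling helper, which re-evaluates a bash condition until it holds. A minimal sketch of that helper, reconstructed only from the xtrace in this log (the retry delay and the failure reporting are assumptions, not visible here):

    waitforcondition() {
        local cond=$1        # e.g. '[[ "$(get_bdev_list)" == "" ]]' as seen in the trace
        local max=10

        while ((max--)); do
            if eval "$cond"; then
                return 0     # condition met, matches the '@922 return 0' lines above
            fi
            sleep 1          # assumption: pacing between retries is not shown in the xtrace
        done

        echo "condition not met: $cond" >&2   # assumption: the real helper's failure path may differ
        return 1
    }

The host/discovery.sh@143 step that follows exercises the duplicate-start path: issuing bdev_nvme_start_discovery a second time with the same bdev name (-b nvme) while the first discovery service is still attached is expected to fail with JSON-RPC error -17 "File exists", and the NOT wrapper treats that expected failure as a pass.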
00:14:12.903 [2024-11-26 19:48:08.089838] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:14:12.903 [2024-11-26 19:48:08.089869] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:14:12.903 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.903 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:14:12.903 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:14:12.903 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:14:12.903 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:12.903 [2024-11-26 19:48:08.092391] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x2245c30 was disconnected and freed. delete nvme_qpair. 00:14:12.903 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:12.903 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:12.903 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:12.903 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:14:12.903 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.903 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:12.903 request: 00:14:12.903 { 00:14:12.903 "name": "nvme", 00:14:12.903 "trtype": "tcp", 00:14:12.903 "traddr": "10.0.0.3", 00:14:12.903 "adrfam": "ipv4", 00:14:12.903 "trsvcid": "8009", 00:14:12.903 "hostnqn": "nqn.2021-12.io.spdk:test", 00:14:12.903 "wait_for_attach": true, 00:14:12.903 "method": "bdev_nvme_start_discovery", 00:14:12.903 "req_id": 1 00:14:12.903 } 00:14:12.903 Got JSON-RPC error response 00:14:12.903 response: 00:14:12.903 { 00:14:12.903 "code": -17, 00:14:12.903 "message": "File exists" 00:14:12.903 } 00:14:12.903 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:12.903 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:14:12.903 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:12.904 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:12.904 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:12.904 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:14:12.904 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:14:12.904 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock 
bdev_nvme_get_discovery_info 00:14:12.904 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:14:12.904 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.904 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:12.904 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:14:12.904 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.904 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:14:12.904 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:14:12.904 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:12.904 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.904 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:12.904 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:12.904 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:12.904 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:13.163 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.163 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:14:13.163 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:14:13.163 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:14:13.163 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:14:13.163 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:13.163 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:13.163 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:13.163 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:13.163 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:14:13.163 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.163 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:13.163 request: 00:14:13.163 { 00:14:13.163 "name": "nvme_second", 00:14:13.163 "trtype": "tcp", 00:14:13.163 "traddr": "10.0.0.3", 00:14:13.163 "adrfam": "ipv4", 00:14:13.163 "trsvcid": "8009", 00:14:13.163 "hostnqn": "nqn.2021-12.io.spdk:test", 00:14:13.163 "wait_for_attach": true, 00:14:13.163 "method": "bdev_nvme_start_discovery", 
00:14:13.163 "req_id": 1 00:14:13.163 } 00:14:13.163 Got JSON-RPC error response 00:14:13.163 response: 00:14:13.163 { 00:14:13.163 "code": -17, 00:14:13.163 "message": "File exists" 00:14:13.163 } 00:14:13.163 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:13.163 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:14:13.163 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:13.163 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:13.163 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:13.163 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:14:13.163 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:14:13.163 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:14:13.163 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.163 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:13.163 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:14:13.163 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:14:13.163 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.163 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:14:13.163 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:14:13.163 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:13.163 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.163 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:14:13.163 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:13.163 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:13.163 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:14:13.163 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.163 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:14:13.163 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:14:13.163 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:14:13.163 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:14:13.163 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:13.163 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:13.163 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:13.163 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:13.163 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:14:13.163 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.163 19:48:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:14.095 [2024-11-26 19:48:09.266677] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:14:14.095 [2024-11-26 19:48:09.266728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2239f40 with addr=10.0.0.3, port=8010 00:14:14.095 [2024-11-26 19:48:09.266742] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:14:14.095 [2024-11-26 19:48:09.266748] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:14:14.095 [2024-11-26 19:48:09.266754] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:14:15.028 [2024-11-26 19:48:10.266671] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:14:15.028 [2024-11-26 19:48:10.266712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2239f40 with addr=10.0.0.3, port=8010 00:14:15.028 [2024-11-26 19:48:10.266725] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:14:15.028 [2024-11-26 19:48:10.266730] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:14:15.028 [2024-11-26 19:48:10.266735] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:14:16.401 [2024-11-26 19:48:11.266585] bdev_nvme.c:7527:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:14:16.402 request: 00:14:16.402 { 00:14:16.402 "name": "nvme_second", 00:14:16.402 "trtype": "tcp", 00:14:16.402 "traddr": "10.0.0.3", 00:14:16.402 "adrfam": "ipv4", 00:14:16.402 "trsvcid": "8010", 00:14:16.402 "hostnqn": "nqn.2021-12.io.spdk:test", 00:14:16.402 "wait_for_attach": false, 00:14:16.402 "attach_timeout_ms": 3000, 00:14:16.402 "method": "bdev_nvme_start_discovery", 00:14:16.402 "req_id": 1 00:14:16.402 } 00:14:16.402 Got JSON-RPC error response 00:14:16.402 response: 00:14:16.402 { 00:14:16.402 "code": -110, 00:14:16.402 "message": "Connection timed out" 00:14:16.402 } 00:14:16.402 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:16.402 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:14:16.402 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:16.402 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:16.402 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:16.402 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:14:16.402 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@67 -- # jq -r '.[].name' 00:14:16.402 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:14:16.402 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.402 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:14:16.402 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:16.402 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:14:16.402 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.402 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:14:16.402 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:14:16.402 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 74482 00:14:16.402 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:14:16.402 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:16.402 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:14:16.402 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:16.402 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:14:16.402 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:16.402 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:16.402 rmmod nvme_tcp 00:14:16.402 rmmod nvme_fabrics 00:14:16.402 rmmod nvme_keyring 00:14:16.402 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:16.402 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:14:16.402 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:14:16.402 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 74455 ']' 00:14:16.402 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 74455 00:14:16.402 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 74455 ']' 00:14:16.402 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 74455 00:14:16.402 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:14:16.402 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:16.402 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74455 00:14:16.402 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:16.402 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:16.402 killing process with pid 74455 00:14:16.402 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74455' 00:14:16.402 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 74455 00:14:16.402 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- common/autotest_common.sh@978 -- # wait 74455 00:14:16.402 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:16.402 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:16.402 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:16.402 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:14:16.402 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:14:16.402 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:16.402 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:14:16.402 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:16.402 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:16.402 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:16.402 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:16.402 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:16.402 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:16.402 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:16.402 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:16.402 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:16.402 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:16.402 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:16.661 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:16.661 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:16.661 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:16.661 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:16.661 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:16.661 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:16.661 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:16.661 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:16.661 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:14:16.661 00:14:16.661 real 0m9.031s 00:14:16.661 user 0m16.552s 00:14:16.661 sys 0m1.500s 00:14:16.661 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:16.661 ************************************ 00:14:16.661 END TEST nvmf_host_discovery 00:14:16.661 
************************************ 00:14:16.661 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:16.661 19:48:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:14:16.661 19:48:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:16.661 19:48:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:16.661 19:48:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:14:16.661 ************************************ 00:14:16.661 START TEST nvmf_host_multipath_status 00:14:16.661 ************************************ 00:14:16.661 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:14:16.661 * Looking for test storage... 00:14:16.661 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:16.661 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:16.661 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:14:16.661 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:16.661 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:16.661 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:16.661 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:16.661 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:16.661 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:14:16.661 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:14:16.661 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:14:16.661 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:14:16.661 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:14:16.661 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:14:16.661 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:14:16.661 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:16.661 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:14:16.661 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:14:16.661 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:16.661 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:16.661 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:14:16.661 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:14:16.661 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:16.661 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:14:16.661 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:14:16.661 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:14:16.661 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:14:16.661 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:16.661 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:14:16.661 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:14:16.661 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:16.661 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:16.661 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:14:16.661 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:16.661 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:16.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:16.661 --rc genhtml_branch_coverage=1 00:14:16.661 --rc genhtml_function_coverage=1 00:14:16.661 --rc genhtml_legend=1 00:14:16.661 --rc geninfo_all_blocks=1 00:14:16.661 --rc geninfo_unexecuted_blocks=1 00:14:16.661 00:14:16.661 ' 00:14:16.661 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:16.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:16.661 --rc genhtml_branch_coverage=1 00:14:16.661 --rc genhtml_function_coverage=1 00:14:16.661 --rc genhtml_legend=1 00:14:16.661 --rc geninfo_all_blocks=1 00:14:16.661 --rc geninfo_unexecuted_blocks=1 00:14:16.661 00:14:16.661 ' 00:14:16.661 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:16.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:16.661 --rc genhtml_branch_coverage=1 00:14:16.661 --rc genhtml_function_coverage=1 00:14:16.661 --rc genhtml_legend=1 00:14:16.661 --rc geninfo_all_blocks=1 00:14:16.661 --rc geninfo_unexecuted_blocks=1 00:14:16.661 00:14:16.661 ' 00:14:16.661 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:16.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:16.661 --rc genhtml_branch_coverage=1 00:14:16.661 --rc genhtml_function_coverage=1 00:14:16.661 --rc genhtml_legend=1 00:14:16.661 --rc geninfo_all_blocks=1 00:14:16.661 --rc geninfo_unexecuted_blocks=1 00:14:16.661 00:14:16.661 ' 00:14:16.661 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:16.920 19:48:11 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:14:16.920 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:16.920 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:16.920 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:16.920 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:16.920 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:16.920 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:16.920 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:16.920 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:16.920 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:16.920 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:16.920 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:14:16.920 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=91838eb1-5852-43eb-90b2-09876f360ab2 00:14:16.920 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:16.920 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:16.920 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:16.920 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:16.920 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:16.920 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:14:16.920 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:16.920 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:16.920 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:16.920 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.920 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.920 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.920 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:14:16.921 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.921 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:14:16.921 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:16.921 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:16.921 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:16.921 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:16.921 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:16.921 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:16.921 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:16.921 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:16.921 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:16.921 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:16.921 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:16.921 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:16.921 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:16.921 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:14:16.921 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:16.921 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:14:16.921 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:14:16.921 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:16.921 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:16.921 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:16.921 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:16.921 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:16.921 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:16.921 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:16.921 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:16.921 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:16.921 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:16.921 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:16.921 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:16.921 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:16.921 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:16.921 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:16.921 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:16.921 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:16.921 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:16.921 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:16.921 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:16.921 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:16.921 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:16.921 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:16.921 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:16.921 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:16.921 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:16.921 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:16.921 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:16.921 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:16.921 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:16.921 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:16.921 Cannot find device "nvmf_init_br" 00:14:16.921 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:14:16.921 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:16.921 Cannot find device "nvmf_init_br2" 00:14:16.921 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:14:16.921 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:16.921 Cannot find device "nvmf_tgt_br" 00:14:16.921 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:14:16.921 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:16.921 Cannot find device "nvmf_tgt_br2" 00:14:16.921 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:14:16.921 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:16.921 Cannot find device "nvmf_init_br" 00:14:16.921 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:14:16.921 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:16.921 Cannot find device "nvmf_init_br2" 00:14:16.921 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:14:16.921 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:16.921 Cannot find device "nvmf_tgt_br" 00:14:16.921 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:14:16.921 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:16.921 Cannot find device "nvmf_tgt_br2" 00:14:16.921 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:14:16.921 19:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:16.921 Cannot find device "nvmf_br" 00:14:16.921 19:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:14:16.921 19:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:14:16.921 Cannot find device "nvmf_init_if" 00:14:16.921 19:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:14:16.921 19:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:16.921 Cannot find device "nvmf_init_if2" 00:14:16.921 19:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:14:16.921 19:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:16.921 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:16.921 19:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:14:16.921 19:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:16.921 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:16.921 19:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:14:16.921 19:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:16.921 19:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:16.921 19:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:16.921 19:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:16.921 19:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:16.921 19:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:16.921 19:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:16.921 19:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:16.921 19:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:16.921 19:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:16.921 19:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:16.921 19:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:16.921 19:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:16.921 19:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:16.921 19:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:16.921 19:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:16.921 19:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:16.921 19:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:16.921 19:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:16.921 19:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:16.921 19:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:16.922 19:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:16.922 19:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:16.922 19:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:16.922 19:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:17.180 19:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:17.180 19:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:17.180 19:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:17.180 19:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:17.180 19:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:17.180 19:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:17.180 19:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:17.180 19:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:17.180 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:17.180 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:14:17.180 00:14:17.180 --- 10.0.0.3 ping statistics --- 00:14:17.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:17.180 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:14:17.180 19:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:17.180 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:17.180 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:14:17.180 00:14:17.180 --- 10.0.0.4 ping statistics --- 00:14:17.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:17.180 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:14:17.180 19:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:17.180 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:17.180 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:14:17.180 00:14:17.180 --- 10.0.0.1 ping statistics --- 00:14:17.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:17.180 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:14:17.180 19:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:17.180 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:17.180 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:14:17.180 00:14:17.180 --- 10.0.0.2 ping statistics --- 00:14:17.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:17.180 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:14:17.180 19:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:17.180 19:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 00:14:17.180 19:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:17.180 19:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:17.180 19:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:17.180 19:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:17.180 19:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:17.180 19:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:17.180 19:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:17.180 19:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:14:17.180 19:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:17.180 19:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:17.180 19:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:14:17.180 19:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=74972 00:14:17.180 19:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 74972 00:14:17.180 19:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 74972 ']' 00:14:17.180 19:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:17.180 19:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:17.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:17.180 19:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
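(Editor's note.) The trace above is the nvmf_veth_init phase from nvmf/common.sh: leftover interfaces are deleted first (hence the "Cannot find device" lines), a fresh network namespace and four veth pairs are created, the target-side ends are moved into nvmf_tgt_ns_spdk, the host-side peers are joined on the nvmf_br bridge, TCP port 4420 is opened in iptables, and reachability is confirmed with single pings in both directions before nvmf_tgt is started in the namespace. As a rough orientation only, here is a condensed sketch of the equivalent commands; interface names and addresses are copied from the trace, while cleanup, the SPDK_NVMF iptables comment tags, and error handling of the real helper are omitted.

#!/usr/bin/env bash
# Hypothetical condensed replay of the veth/bridge topology seen in the trace.
set -e

ip netns add nvmf_tgt_ns_spdk

# Two initiator-side and two target-side veth pairs.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# The target ends live in the namespace where nvmf_tgt will run.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Addressing as logged: .1/.2 on the initiator side, .3/.4 on the target side.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring everything up and bridge the host-side peers together.
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
           nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# Open NVMe/TCP port 4420 towards the initiator interfaces, allow
# bridge-internal forwarding, then verify reachability both ways.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2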
00:14:17.180 19:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:14:17.180 19:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:17.180 19:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:14:17.180 [2024-11-26 19:48:12.263792] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:14:17.180 [2024-11-26 19:48:12.263847] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:17.180 [2024-11-26 19:48:12.404229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:17.440 [2024-11-26 19:48:12.439589] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:17.440 [2024-11-26 19:48:12.439634] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:17.440 [2024-11-26 19:48:12.439641] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:17.440 [2024-11-26 19:48:12.439646] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:17.440 [2024-11-26 19:48:12.439650] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:17.440 [2024-11-26 19:48:12.440365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:17.440 [2024-11-26 19:48:12.440586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:17.440 [2024-11-26 19:48:12.471678] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:18.004 19:48:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:18.004 19:48:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:14:18.004 19:48:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:18.004 19:48:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:18.004 19:48:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:14:18.004 19:48:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:18.004 19:48:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=74972 00:14:18.004 19:48:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:18.262 [2024-11-26 19:48:13.348412] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:18.262 19:48:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:14:18.520 Malloc0 00:14:18.520 19:48:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:14:18.777 19:48:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:18.777 19:48:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:19.035 [2024-11-26 19:48:14.173052] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:19.035 19:48:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:14:19.294 [2024-11-26 19:48:14.381202] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:14:19.294 19:48:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:14:19.294 19:48:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=75028 00:14:19.294 19:48:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:19.294 19:48:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 75028 /var/tmp/bdevperf.sock 00:14:19.294 19:48:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 75028 ']' 00:14:19.294 19:48:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:19.294 19:48:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:19.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:19.294 19:48:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
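(Editor's note.) At this point the target side has been provisioned over rpc.py and bdevperf has been launched against /var/tmp/bdevperf.sock with "-q 128 -o 4096 -w verify -t 90" as logged. Below is a hedged replay of those provisioning calls with values copied verbatim from the trace; rpc_py is the script's own shorthand for scripts/rpc.py, and flag semantics beyond the obvious ones are not re-derived here.

# Hypothetical replay of the target provisioning traced above.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

"$rpc_py" nvmf_create_transport -t tcp -o -u 8192        # TCP transport, options as logged
"$rpc_py" bdev_malloc_create 64 512 -b Malloc0            # 64 MB malloc bdev, 512-byte blocks
"$rpc_py" nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -r -m 2   # -r enables ANA reporting
"$rpc_py" nvmf_subsystem_add_ns "$NQN" Malloc0

# Two listeners on the same address but different ports -> two paths.
"$rpc_py" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.3 -s 4420
"$rpc_py" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.3 -s 4421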
00:14:19.294 19:48:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:19.294 19:48:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:14:19.552 19:48:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:19.552 19:48:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:14:19.552 19:48:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:14:19.809 19:48:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:14:20.066 Nvme0n1 00:14:20.066 19:48:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:14:20.324 Nvme0n1 00:14:20.324 19:48:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:14:20.324 19:48:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:14:22.852 19:48:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:14:22.852 19:48:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:14:22.852 19:48:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:14:22.852 19:48:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:14:23.785 19:48:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:14:23.785 19:48:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:14:23.785 19:48:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:23.785 19:48:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:14:24.042 19:48:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:24.043 19:48:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:14:24.043 19:48:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:14:24.043 19:48:19 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:24.300 19:48:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:14:24.300 19:48:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:14:24.300 19:48:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:24.300 19:48:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:14:24.300 19:48:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:24.300 19:48:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:14:24.300 19:48:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:24.300 19:48:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:14:24.557 19:48:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:24.557 19:48:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:14:24.557 19:48:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:24.557 19:48:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:14:24.814 19:48:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:24.814 19:48:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:14:24.814 19:48:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:24.814 19:48:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:14:25.071 19:48:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:25.071 19:48:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:14:25.071 19:48:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:14:25.329 19:48:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 
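(Editor's note.) The lines above set up the initiator side: bdev_nvme_set_options is applied to the bdevperf instance, the same subsystem is attached twice, once per listener port, and the first ANA permutation (optimized/optimized) is applied before the status checks begin. Because both attach calls use the same controller name (-b Nvme0) together with -x multipath, the second call adds a path to the existing device rather than creating a new one, which is why both calls print Nvme0n1. A sketch of that sequence; the loop is an editorial condensation and the remaining flags are copied verbatim from the trace without re-deriving their meaning.

# Hypothetical replay of the bdevperf-side multipath attach traced above.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bdevperf_rpc_sock=/var/tmp/bdevperf.sock
NQN=nqn.2016-06.io.spdk:cnode1

"$rpc_py" -s "$bdevperf_rpc_sock" bdev_nvme_set_options -r -1   # option string as logged

# One bdev (Nvme0n1), two TCP paths to the same subsystem.
for port in 4420 4421; do
    "$rpc_py" -s "$bdevperf_rpc_sock" bdev_nvme_attach_controller -b Nvme0 -t tcp \
        -a 10.0.0.3 -s "$port" -f ipv4 -n "$NQN" -x multipath -l -1 -o 10
done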
00:14:25.329 19:48:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:14:26.369 19:48:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:14:26.369 19:48:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:14:26.369 19:48:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:26.369 19:48:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:14:26.627 19:48:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:14:26.627 19:48:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:14:26.627 19:48:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:26.627 19:48:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:14:26.886 19:48:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:26.886 19:48:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:14:26.886 19:48:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:26.886 19:48:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:14:27.144 19:48:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:27.144 19:48:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:14:27.144 19:48:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:14:27.144 19:48:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:27.144 19:48:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:27.144 19:48:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:14:27.144 19:48:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:27.144 19:48:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:14:27.403 19:48:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:27.403 19:48:22 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:14:27.403 19:48:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:27.403 19:48:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:14:27.661 19:48:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:27.661 19:48:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:14:27.661 19:48:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:14:27.919 19:48:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:14:28.177 19:48:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:14:29.115 19:48:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:14:29.115 19:48:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:14:29.115 19:48:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:29.115 19:48:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:14:29.374 19:48:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:29.374 19:48:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:14:29.374 19:48:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:14:29.374 19:48:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:29.633 19:48:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:14:29.633 19:48:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:14:29.633 19:48:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:14:29.633 19:48:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:29.633 19:48:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:29.633 19:48:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@71 -- # port_status 4421 connected true 00:14:29.633 19:48:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:14:29.633 19:48:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:29.891 19:48:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:29.891 19:48:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:14:29.892 19:48:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:29.892 19:48:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:14:30.149 19:48:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:30.149 19:48:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:14:30.149 19:48:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:30.149 19:48:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:14:30.406 19:48:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:30.406 19:48:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:14:30.406 19:48:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:14:30.664 19:48:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:14:30.922 19:48:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:14:31.855 19:48:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:14:31.855 19:48:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:14:31.855 19:48:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:31.855 19:48:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:14:32.112 19:48:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:32.112 19:48:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 
4421 current false 00:14:32.112 19:48:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:32.112 19:48:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:14:32.370 19:48:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:14:32.370 19:48:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:14:32.370 19:48:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:32.370 19:48:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:14:32.370 19:48:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:32.370 19:48:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:14:32.370 19:48:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:14:32.370 19:48:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:32.632 19:48:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:32.632 19:48:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:14:32.632 19:48:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:14:32.632 19:48:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:32.890 19:48:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:32.890 19:48:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:14:32.890 19:48:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:14:32.890 19:48:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:33.149 19:48:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:14:33.149 19:48:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:14:33.149 19:48:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:14:33.406 19:48:28 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:14:33.406 19:48:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:14:34.777 19:48:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:14:34.777 19:48:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:14:34.777 19:48:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:34.777 19:48:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:14:34.777 19:48:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:14:34.777 19:48:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:14:34.777 19:48:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:34.777 19:48:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:14:35.035 19:48:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:14:35.035 19:48:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:14:35.035 19:48:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:14:35.036 19:48:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:35.036 19:48:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:35.036 19:48:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:14:35.036 19:48:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:35.036 19:48:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:14:35.293 19:48:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:35.293 19:48:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:14:35.293 19:48:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:14:35.294 19:48:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:35.551 19:48:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:14:35.551 19:48:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:14:35.551 19:48:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:35.551 19:48:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:14:35.808 19:48:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:14:35.808 19:48:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:14:35.808 19:48:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:14:36.066 19:48:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:14:36.066 19:48:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:14:37.436 19:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:14:37.436 19:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:14:37.436 19:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:14:37.436 19:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:37.436 19:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:14:37.436 19:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:14:37.436 19:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:37.436 19:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:14:37.724 19:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:37.724 19:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:14:37.724 19:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:37.724 19:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4420").connected' 00:14:37.724 19:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:37.724 19:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:14:37.724 19:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:37.724 19:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:14:37.981 19:48:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:37.981 19:48:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:14:37.981 19:48:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:14:37.981 19:48:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:38.238 19:48:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:14:38.238 19:48:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:14:38.238 19:48:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:38.238 19:48:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:14:38.495 19:48:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:38.495 19:48:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:14:38.751 19:48:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:14:38.751 19:48:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:14:38.751 19:48:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:14:39.008 19:48:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:14:40.382 19:48:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:14:40.382 19:48:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:14:40.382 19:48:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:40.382 19:48:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:14:40.382 19:48:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:40.382 19:48:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:14:40.382 19:48:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:40.382 19:48:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:14:40.382 19:48:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:40.382 19:48:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:14:40.382 19:48:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:40.641 19:48:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:14:40.641 19:48:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:40.641 19:48:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:14:40.641 19:48:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:40.641 19:48:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:14:40.898 19:48:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:40.898 19:48:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:14:40.898 19:48:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:40.898 19:48:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:14:41.156 19:48:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:41.156 19:48:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:14:41.156 19:48:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:41.156 19:48:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:14:41.414 19:48:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:41.414 19:48:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:14:41.414 19:48:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:14:41.671 19:48:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:14:41.927 19:48:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:14:42.861 19:48:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:14:42.861 19:48:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:14:42.861 19:48:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:42.861 19:48:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:14:43.119 19:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:14:43.119 19:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:14:43.119 19:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:43.119 19:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:14:43.377 19:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:43.377 19:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:14:43.377 19:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:14:43.378 19:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:43.378 19:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:43.378 19:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:14:43.378 19:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:43.378 19:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:14:43.636 19:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 
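(Editor's note.) From host/multipath_status.sh@116 onward the multipath policy on the bdevperf side is switched to active_active and the same ANA permutations are replayed: with both listeners optimized, both 4420 and 4421 now report current=true, and once 4420 is demoted to non_optimized only 4421 stays current, as the check_status calls above show. A sketch of the two knobs being toggled; the set_ANA_state name comes from the trace itself, but its body here is a reconstruction, not a copy of multipath_status.sh.

# Hypothetical sketch of the controls exercised in this phase of the test.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bdevperf_rpc_sock=/var/tmp/bdevperf.sock
NQN=nqn.2016-06.io.spdk:cnode1

set_ANA_state() {   # set_ANA_state <state for port 4420> <state for port 4421>
    "$rpc_py" nvmf_subsystem_listener_set_ana_state "$NQN" \
        -t tcp -a 10.0.0.3 -s 4420 -n "$1"
    "$rpc_py" nvmf_subsystem_listener_set_ana_state "$NQN" \
        -t tcp -a 10.0.0.3 -s 4421 -n "$2"
}

# Switch from a single active path to using all suitable paths at once.
"$rpc_py" -s "$bdevperf_rpc_sock" bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active

set_ANA_state optimized optimized       # trace: both paths then report current=true
sleep 1                                 # the script sleeps before each check
set_ANA_state non_optimized optimized   # trace: only 4421 then stays current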
00:14:43.636 19:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:14:43.636 19:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:14:43.636 19:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:43.894 19:48:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:43.894 19:48:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:14:43.894 19:48:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:43.894 19:48:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:14:44.160 19:48:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:44.160 19:48:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:14:44.160 19:48:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:14:44.418 19:48:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:14:44.677 19:48:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:14:45.612 19:48:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:14:45.612 19:48:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:14:45.612 19:48:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:45.612 19:48:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:14:45.870 19:48:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:45.870 19:48:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:14:45.870 19:48:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:14:45.870 19:48:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:45.870 19:48:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:45.870 19:48:41 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:14:45.870 19:48:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:45.870 19:48:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:14:46.128 19:48:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:46.128 19:48:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:14:46.128 19:48:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:14:46.128 19:48:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:46.386 19:48:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:46.386 19:48:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:14:46.386 19:48:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:46.386 19:48:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:14:46.644 19:48:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:46.644 19:48:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:14:46.644 19:48:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:46.644 19:48:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:14:46.902 19:48:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:46.902 19:48:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:14:46.902 19:48:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:14:47.159 19:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:14:47.159 19:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:14:48.532 19:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:14:48.532 19:48:43 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:14:48.532 19:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:48.532 19:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:14:48.532 19:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:48.532 19:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:14:48.532 19:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:14:48.532 19:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:48.790 19:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:14:48.790 19:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:14:48.790 19:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:48.790 19:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:14:49.048 19:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:49.048 19:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:14:49.048 19:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:14:49.048 19:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:49.306 19:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:49.306 19:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:14:49.306 19:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:49.306 19:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:14:49.306 19:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:14:49.306 19:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:14:49.306 19:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:14:49.306 
19:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:14:49.564 19:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:14:49.564 19:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 75028 00:14:49.564 19:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 75028 ']' 00:14:49.564 19:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 75028 00:14:49.564 19:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:14:49.564 19:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:49.564 19:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75028 00:14:49.564 19:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:49.564 19:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:49.564 19:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75028' 00:14:49.564 killing process with pid 75028 00:14:49.564 19:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 75028 00:14:49.564 19:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 75028 00:14:49.564 { 00:14:49.564 "results": [ 00:14:49.564 { 00:14:49.564 "job": "Nvme0n1", 00:14:49.564 "core_mask": "0x4", 00:14:49.564 "workload": "verify", 00:14:49.564 "status": "terminated", 00:14:49.564 "verify_range": { 00:14:49.564 "start": 0, 00:14:49.564 "length": 16384 00:14:49.564 }, 00:14:49.564 "queue_depth": 128, 00:14:49.564 "io_size": 4096, 00:14:49.564 "runtime": 29.217956, 00:14:49.564 "iops": 12417.603750241804, 00:14:49.564 "mibps": 48.50626464938205, 00:14:49.564 "io_failed": 0, 00:14:49.564 "io_timeout": 0, 00:14:49.564 "avg_latency_us": 10287.545756574462, 00:14:49.564 "min_latency_us": 382.8184615384615, 00:14:49.564 "max_latency_us": 3019898.88 00:14:49.564 } 00:14:49.564 ], 00:14:49.564 "core_count": 1 00:14:49.564 } 00:14:49.825 19:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 75028 00:14:49.825 19:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:14:49.825 [2024-11-26 19:48:14.427843] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
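The terminated-job summary printed by bdevperf a few entries up is internally consistent: with "io_size": 4096 bytes, throughput in MiB/s is iops * io_size / 2^20. A quick worked check, using nothing beyond the numbers already in that JSON:

    # 12417.603750241804 IOPS * 4096 B per I/O / 1048576 B per MiB
    awk 'BEGIN { printf "%.5f MiB/s\n", 12417.603750241804 * 4096 / 2^20 }'
    # -> 48.50626 MiB/s, matching the reported "mibps" field.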
00:14:49.825 [2024-11-26 19:48:14.427920] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75028 ] 00:14:49.825 [2024-11-26 19:48:14.563433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.825 [2024-11-26 19:48:14.598841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:49.825 [2024-11-26 19:48:14.628811] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:49.825 Running I/O for 90 seconds... 00:14:49.825 9351.00 IOPS, 36.53 MiB/s [2024-11-26T19:48:45.072Z] 11183.00 IOPS, 43.68 MiB/s [2024-11-26T19:48:45.072Z] 11615.33 IOPS, 45.37 MiB/s [2024-11-26T19:48:45.072Z] 11869.50 IOPS, 46.37 MiB/s [2024-11-26T19:48:45.072Z] 12057.00 IOPS, 47.10 MiB/s [2024-11-26T19:48:45.072Z] 12288.83 IOPS, 48.00 MiB/s [2024-11-26T19:48:45.072Z] 12455.57 IOPS, 48.65 MiB/s [2024-11-26T19:48:45.072Z] 12550.25 IOPS, 49.02 MiB/s [2024-11-26T19:48:45.072Z] 12560.22 IOPS, 49.06 MiB/s [2024-11-26T19:48:45.072Z] 12547.40 IOPS, 49.01 MiB/s [2024-11-26T19:48:45.072Z] 12545.64 IOPS, 49.01 MiB/s [2024-11-26T19:48:45.072Z] 12524.17 IOPS, 48.92 MiB/s [2024-11-26T19:48:45.072Z] [2024-11-26 19:48:28.400854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:104848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.825 [2024-11-26 19:48:28.400910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:14:49.825 [2024-11-26 19:48:28.400945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:104856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.825 [2024-11-26 19:48:28.400954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:14:49.825 [2024-11-26 19:48:28.400968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:104864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.825 [2024-11-26 19:48:28.400975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:14:49.825 [2024-11-26 19:48:28.400988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:104872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.825 [2024-11-26 19:48:28.400995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:14:49.825 [2024-11-26 19:48:28.401008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:104880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.825 [2024-11-26 19:48:28.401015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:14:49.825 [2024-11-26 19:48:28.401028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:104888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.825 [2024-11-26 19:48:28.401035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:14:49.825 [2024-11-26 19:48:28.401048] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:104896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.825 [2024-11-26 19:48:28.401055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:14:49.825 [2024-11-26 19:48:28.401067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:104904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.826 [2024-11-26 19:48:28.401075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:14:49.826 [2024-11-26 19:48:28.401090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:104912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.826 [2024-11-26 19:48:28.401097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:14:49.826 [2024-11-26 19:48:28.401132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:104920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.826 [2024-11-26 19:48:28.401140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:14:49.826 [2024-11-26 19:48:28.401153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:104928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.826 [2024-11-26 19:48:28.401159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:14:49.826 [2024-11-26 19:48:28.401172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.826 [2024-11-26 19:48:28.401179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:14:49.826 [2024-11-26 19:48:28.401192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:104944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.826 [2024-11-26 19:48:28.401199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:14:49.826 [2024-11-26 19:48:28.401211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.826 [2024-11-26 19:48:28.401219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:14:49.826 [2024-11-26 19:48:28.401232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:104960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.826 [2024-11-26 19:48:28.401239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:14:49.826 [2024-11-26 19:48:28.401252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:104968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.826 [2024-11-26 19:48:28.401259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 
sqhd:0045 p:0 m:0 dnr:0 00:14:49.826 [2024-11-26 19:48:28.401272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:104400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.826 [2024-11-26 19:48:28.401280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:14:49.826 [2024-11-26 19:48:28.401294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:104408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.826 [2024-11-26 19:48:28.401302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:14:49.826 [2024-11-26 19:48:28.401315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:104416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.826 [2024-11-26 19:48:28.401322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:14:49.826 [2024-11-26 19:48:28.401335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:104424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.826 [2024-11-26 19:48:28.401342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:14:49.826 [2024-11-26 19:48:28.401355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:104432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.826 [2024-11-26 19:48:28.401362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:14:49.826 [2024-11-26 19:48:28.401375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:104440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.826 [2024-11-26 19:48:28.401388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:14:49.826 [2024-11-26 19:48:28.401401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:104448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.826 [2024-11-26 19:48:28.401408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:14:49.826 [2024-11-26 19:48:28.401421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:104456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.826 [2024-11-26 19:48:28.401429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:14:49.826 [2024-11-26 19:48:28.401442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:104464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.826 [2024-11-26 19:48:28.401449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:14:49.826 [2024-11-26 19:48:28.401462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:104472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.826 [2024-11-26 19:48:28.401469] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:14:49.826 [2024-11-26 19:48:28.401482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:104480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.826 [2024-11-26 19:48:28.401489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:14:49.826 [2024-11-26 19:48:28.401502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:104488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.826 [2024-11-26 19:48:28.401510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:14:49.826 [2024-11-26 19:48:28.401523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:104496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.826 [2024-11-26 19:48:28.401530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:14:49.826 [2024-11-26 19:48:28.401543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:104504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.826 [2024-11-26 19:48:28.401550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:14:49.826 [2024-11-26 19:48:28.401563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:104512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.826 [2024-11-26 19:48:28.401570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:14:49.826 [2024-11-26 19:48:28.401583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:104520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.826 [2024-11-26 19:48:28.401590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:14:49.826 [2024-11-26 19:48:28.401604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:104976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.826 [2024-11-26 19:48:28.401613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:14:49.826 [2024-11-26 19:48:28.401626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:104984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.826 [2024-11-26 19:48:28.401638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:14:49.826 [2024-11-26 19:48:28.401651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:104992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.826 [2024-11-26 19:48:28.401658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:14:49.826 [2024-11-26 19:48:28.401671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:105000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.826 [2024-11-26 
19:48:28.401679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:14:49.826 [2024-11-26 19:48:28.401692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:105008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.826 [2024-11-26 19:48:28.401699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:14:49.826 [2024-11-26 19:48:28.401712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:105016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.826 [2024-11-26 19:48:28.401719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:14:49.826 [2024-11-26 19:48:28.401732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:105024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.826 [2024-11-26 19:48:28.401740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:14:49.826 [2024-11-26 19:48:28.401752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:105032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.826 [2024-11-26 19:48:28.401759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:14:49.826 [2024-11-26 19:48:28.401782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:105040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.826 [2024-11-26 19:48:28.401790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:14:49.826 [2024-11-26 19:48:28.401803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:105048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.826 [2024-11-26 19:48:28.401810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:14:49.827 [2024-11-26 19:48:28.401823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:105056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.827 [2024-11-26 19:48:28.401830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:14:49.827 [2024-11-26 19:48:28.401843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:105064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.827 [2024-11-26 19:48:28.401851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:14:49.827 [2024-11-26 19:48:28.401864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:105072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.827 [2024-11-26 19:48:28.401871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:14:49.827 [2024-11-26 19:48:28.401884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:105080 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.827 [2024-11-26 19:48:28.401895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:14:49.827 [2024-11-26 19:48:28.401908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:105088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.827 [2024-11-26 19:48:28.401915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:14:49.827 [2024-11-26 19:48:28.401928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:105096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.827 [2024-11-26 19:48:28.401935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:14:49.827 [2024-11-26 19:48:28.401948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:104528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.827 [2024-11-26 19:48:28.401956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:14:49.827 [2024-11-26 19:48:28.401969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:104536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.827 [2024-11-26 19:48:28.401977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:14:49.827 [2024-11-26 19:48:28.401990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:104544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.827 [2024-11-26 19:48:28.401997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:14:49.827 [2024-11-26 19:48:28.402010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:104552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.827 [2024-11-26 19:48:28.402017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:14:49.827 [2024-11-26 19:48:28.402030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:104560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.827 [2024-11-26 19:48:28.402038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:14:49.827 [2024-11-26 19:48:28.402050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:104568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.827 [2024-11-26 19:48:28.402058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:14:49.827 [2024-11-26 19:48:28.402071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:104576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.827 [2024-11-26 19:48:28.402078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:14:49.827 [2024-11-26 19:48:28.402092] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:104584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.827 [2024-11-26 19:48:28.402099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:14:49.827 [2024-11-26 19:48:28.402113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:105104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.827 [2024-11-26 19:48:28.402121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:14:49.827 [2024-11-26 19:48:28.402134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:105112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.827 [2024-11-26 19:48:28.402142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:14:49.827 [2024-11-26 19:48:28.402158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:105120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.827 [2024-11-26 19:48:28.402165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:14:49.827 [2024-11-26 19:48:28.402178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:105128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.827 [2024-11-26 19:48:28.402186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:14:49.827 [2024-11-26 19:48:28.402199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:105136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.827 [2024-11-26 19:48:28.402206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:14:49.827 [2024-11-26 19:48:28.402219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:105144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.827 [2024-11-26 19:48:28.402226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:14:49.827 [2024-11-26 19:48:28.402239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:105152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.827 [2024-11-26 19:48:28.402246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:14:49.827 [2024-11-26 19:48:28.402259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:105160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.827 [2024-11-26 19:48:28.402266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:14:49.827 [2024-11-26 19:48:28.402454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:105168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.827 [2024-11-26 19:48:28.402466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 
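The long run of identical NOTICE pairs above (and below) is the host-side record of the ANA state changes the test drives: each outstanding READ/WRITE on a path that has just gone inaccessible completes with ASYMMETRIC ACCESS INACCESSIBLE (03/02), which the multipath layer is then expected to retry on a remaining accessible path. A quick way to summarize a dump like this after the fact, using the try.txt path the test itself prints, could be:

    # Count completions that came back ANA-inaccessible, and how many distinct
    # LBAs they touched (path and strings taken verbatim from the log above).
    grep -c 'ASYMMETRIC ACCESS INACCESSIBLE' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
    grep -o 'lba:[0-9]*' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt | sort -u | wc -l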
00:14:49.827 [2024-11-26 19:48:28.402484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:105176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.827 [2024-11-26 19:48:28.402492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:14:49.827 [2024-11-26 19:48:28.402509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.827 [2024-11-26 19:48:28.402516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:14:49.827 [2024-11-26 19:48:28.402532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:105192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.827 [2024-11-26 19:48:28.402540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:14:49.827 [2024-11-26 19:48:28.402556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:105200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.827 [2024-11-26 19:48:28.402563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:14:49.827 [2024-11-26 19:48:28.402580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:105208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.827 [2024-11-26 19:48:28.402587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:14:49.827 [2024-11-26 19:48:28.402608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.827 [2024-11-26 19:48:28.402615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:14:49.827 [2024-11-26 19:48:28.402632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:105224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.827 [2024-11-26 19:48:28.402639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:14:49.827 [2024-11-26 19:48:28.402655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:104592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.827 [2024-11-26 19:48:28.402663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:14:49.827 [2024-11-26 19:48:28.402679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:104600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.827 [2024-11-26 19:48:28.402686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:14:49.827 [2024-11-26 19:48:28.402703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:104608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.827 [2024-11-26 19:48:28.402710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.827 [2024-11-26 19:48:28.402727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:104616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.827 [2024-11-26 19:48:28.402734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:14:49.827 [2024-11-26 19:48:28.402750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:104624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.827 [2024-11-26 19:48:28.402758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:14:49.827 [2024-11-26 19:48:28.402784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:104632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.827 [2024-11-26 19:48:28.402792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:14:49.827 [2024-11-26 19:48:28.402808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:104640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.827 [2024-11-26 19:48:28.402817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:14:49.828 [2024-11-26 19:48:28.402833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:104648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.828 [2024-11-26 19:48:28.402841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:14:49.828 [2024-11-26 19:48:28.402857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:104656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.828 [2024-11-26 19:48:28.402871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:14:49.828 [2024-11-26 19:48:28.402888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:104664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.828 [2024-11-26 19:48:28.402895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:14:49.828 [2024-11-26 19:48:28.402911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:104672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.828 [2024-11-26 19:48:28.402923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:14:49.828 [2024-11-26 19:48:28.402939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:104680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.828 [2024-11-26 19:48:28.402946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:14:49.828 [2024-11-26 19:48:28.402963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:104688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.828 [2024-11-26 19:48:28.402970] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:14:49.828 [2024-11-26 19:48:28.402987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:104696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.828 [2024-11-26 19:48:28.403002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:14:49.828 [2024-11-26 19:48:28.403018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:104704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.828 [2024-11-26 19:48:28.403026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:14:49.828 [2024-11-26 19:48:28.403042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:104712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.828 [2024-11-26 19:48:28.403050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:14:49.828 [2024-11-26 19:48:28.403140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.828 [2024-11-26 19:48:28.403149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:14:49.828 [2024-11-26 19:48:28.403167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:105240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.828 [2024-11-26 19:48:28.403175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:49.828 [2024-11-26 19:48:28.403193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:105248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.828 [2024-11-26 19:48:28.403201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:49.828 [2024-11-26 19:48:28.403218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:105256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.828 [2024-11-26 19:48:28.403225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:49.828 [2024-11-26 19:48:28.403243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:105264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.828 [2024-11-26 19:48:28.403250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:49.828 [2024-11-26 19:48:28.403268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:105272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.828 [2024-11-26 19:48:28.403275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:14:49.828 [2024-11-26 19:48:28.403293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:105280 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:14:49.828 [2024-11-26 19:48:28.403305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:14:49.828 [2024-11-26 19:48:28.403323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:105288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.828 [2024-11-26 19:48:28.403330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:14:49.828 [2024-11-26 19:48:28.403348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:105296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.828 [2024-11-26 19:48:28.403357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:14:49.828 [2024-11-26 19:48:28.403374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:105304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.828 [2024-11-26 19:48:28.403382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:14:49.828 [2024-11-26 19:48:28.403400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:105312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.828 [2024-11-26 19:48:28.403407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:14:49.828 [2024-11-26 19:48:28.403424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:105320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.828 [2024-11-26 19:48:28.403432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:14:49.828 [2024-11-26 19:48:28.403449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:105328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.828 [2024-11-26 19:48:28.403456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:14:49.828 [2024-11-26 19:48:28.403474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:105336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.828 [2024-11-26 19:48:28.403481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:14:49.828 [2024-11-26 19:48:28.403498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:105344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.828 [2024-11-26 19:48:28.403506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:14:49.828 [2024-11-26 19:48:28.403523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:105352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.828 [2024-11-26 19:48:28.403530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:14:49.828 [2024-11-26 19:48:28.403548] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:61 nsid:1 lba:104720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.828 [2024-11-26 19:48:28.403555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:14:49.828 [2024-11-26 19:48:28.403572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:104728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.828 [2024-11-26 19:48:28.403580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:14:49.828 [2024-11-26 19:48:28.403597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:104736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.828 [2024-11-26 19:48:28.403609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:14:49.828 [2024-11-26 19:48:28.403627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:104744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.828 [2024-11-26 19:48:28.403634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:49.828 [2024-11-26 19:48:28.403652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:104752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.828 [2024-11-26 19:48:28.403659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:49.828 [2024-11-26 19:48:28.403677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:104760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.828 [2024-11-26 19:48:28.403684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:49.828 [2024-11-26 19:48:28.403702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:104768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.828 [2024-11-26 19:48:28.403710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:49.828 [2024-11-26 19:48:28.403727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:104776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.828 [2024-11-26 19:48:28.403734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:49.828 [2024-11-26 19:48:28.403751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:104784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.828 [2024-11-26 19:48:28.403761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:49.828 [2024-11-26 19:48:28.403786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:104792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.828 [2024-11-26 19:48:28.403794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:49.828 
00:14:49.828 [2024-11-26 19:48:28.403811 → 19:48:42.360178] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: a long burst of READ/WRITE commands on sqid:1 (nsid:1, len:8; lba 104800-105416 in the first burst, lba 34744-35968 in the second), each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0 00:14:49.831
00:14:49.829 Interleaved per-second throughput samples: 12346.62, 11464.71, 10700.40, 10171.94, 10363.24, 10530.22, 10814.53, 11173.05, 11493.86, 11612.86, 11690.65, 11764.38, 11997.28, 12229.04, 12398.33, 12409.82 IOPS (39.7-48.5 MiB/s) [2024-11-26T19:48:45.078Z]
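Every completion in the burst above carries status (03/02): status code type 0x3 (path-related) with status code 0x02, NVMe's "Asymmetric Access Inaccessible", which is what the host sees while the listener it is using sits in the ANA inaccessible state; a multipath-aware host queues and retries that I/O on another path, which is why the interleaved throughput samples dip and then recover. A quick, minimal way to gauge how much I/O hit the inaccessible path is to count those completions in a saved copy of this log (the file name build.log is an assumption, not something this run produces):

    # count ASYMMETRIC ACCESS INACCESSIBLE completions per submission queue in a saved log
    grep -o 'ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:[0-9]*' build.log | sort | uniq -c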
12417.14 IOPS, 48.50 MiB/s [2024-11-26T19:48:45.078Z] Received shutdown signal, test time was about 29.218633 seconds 00:14:49.831 00:14:49.831 Latency(us) 00:14:49.831 [2024-11-26T19:48:45.078Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:49.831 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:49.831 Verification LBA range: start 0x0 length 0x4000 00:14:49.831 Nvme0n1 : 29.22 12417.60 48.51 0.00 0.00 10287.55 382.82 3019898.88 00:14:49.831 [2024-11-26T19:48:45.079Z] =================================================================================================================== 00:14:49.832 [2024-11-26T19:48:45.079Z] Total : 12417.60 48.51 0.00 0.00 10287.55 382.82 3019898.88 00:14:49.832 19:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:50.091 19:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:14:50.091 19:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:14:50.091 19:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:14:50.091 19:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:50.091 19:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:14:50.657 rmmod nvme_tcp 00:14:50.657 19:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:50.657 19:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:14:50.657 19:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:50.657 19:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:50.657 rmmod nvme_fabrics 00:14:50.657 rmmod nvme_keyring 00:14:50.657 19:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:50.915 19:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:14:50.915 19:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:14:50.915 19:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 74972 ']' 00:14:50.915 19:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 74972 00:14:50.915 19:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 74972 ']' 00:14:50.915 19:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 74972 00:14:50.915 19:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:14:50.915 19:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:50.915 19:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74972 00:14:50.915 19:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:50.915 19:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 
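The MiB/s column in the verification summary above is just IOPS times the 4096-byte I/O size the job used: 12417.60 x 4096 bytes ≈ 50.9 MB/s, which is 48.51 MiB/s, matching the reported value. A one-line sanity check of that relationship (a sketch, not part of the test suite):

    # recompute MiB/s from the reported IOPS and the 4 KiB verify I/O size
    awk 'BEGIN { printf "%.2f MiB/s\n", 12417.60 * 4096 / (1024 * 1024) }'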
00:14:50.915 19:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74972' 00:14:50.915 killing process with pid 74972 00:14:50.915 19:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 74972 00:14:50.915 19:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 74972 00:14:50.915 19:48:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:50.915 19:48:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:50.915 19:48:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:50.915 19:48:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:14:50.915 19:48:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:14:50.915 19:48:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:50.915 19:48:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:14:50.915 19:48:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:50.915 19:48:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:50.915 19:48:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:50.915 19:48:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:50.915 19:48:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:50.915 19:48:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:50.915 19:48:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:50.915 19:48:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:50.915 19:48:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:50.915 19:48:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:50.915 19:48:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:51.173 19:48:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:51.173 19:48:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:51.173 19:48:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:51.173 19:48:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:51.173 19:48:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:51.173 19:48:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:51.173 19:48:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 
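The iptr step traced above restores the host firewall by replaying the saved ruleset with every rule the suite tagged with the SPDK_NVMF comment filtered out, so only the test-added ACCEPT rules disappear and any pre-existing rules survive. A minimal standalone sketch of that idiom:

    # drop only the rules carrying the SPDK_NVMF comment tag, keep everything else
    iptables-save | grep -v SPDK_NVMF | iptables-restore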
00:14:51.173 19:48:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:51.173 19:48:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:14:51.173 00:14:51.173 real 0m34.497s 00:14:51.173 user 1m50.264s 00:14:51.173 sys 0m8.161s 00:14:51.173 19:48:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:51.173 19:48:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:14:51.173 ************************************ 00:14:51.173 END TEST nvmf_host_multipath_status 00:14:51.173 ************************************ 00:14:51.173 19:48:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:14:51.173 19:48:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:51.173 19:48:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:51.173 19:48:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:14:51.173 ************************************ 00:14:51.173 START TEST nvmf_discovery_remove_ifc 00:14:51.173 ************************************ 00:14:51.173 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:14:51.173 * Looking for test storage... 00:14:51.173 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:51.173 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:51.173 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:14:51.173 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:51.432 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:51.432 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:51.432 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:51.432 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:51.432 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:14:51.432 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:14:51.432 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:14:51.432 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:14:51.432 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:14:51.432 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:14:51.432 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:14:51.432 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:51.432 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:14:51.432 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 
00:14:51.432 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:51.432 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:51.432 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:14:51.432 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:14:51.432 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:51.432 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:14:51.432 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:14:51.432 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:14:51.432 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:14:51.432 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:51.432 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:14:51.432 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:14:51.432 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:51.432 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:51.432 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:14:51.432 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:51.432 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:51.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.432 --rc genhtml_branch_coverage=1 00:14:51.432 --rc genhtml_function_coverage=1 00:14:51.432 --rc genhtml_legend=1 00:14:51.432 --rc geninfo_all_blocks=1 00:14:51.432 --rc geninfo_unexecuted_blocks=1 00:14:51.432 00:14:51.432 ' 00:14:51.432 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:51.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.432 --rc genhtml_branch_coverage=1 00:14:51.432 --rc genhtml_function_coverage=1 00:14:51.432 --rc genhtml_legend=1 00:14:51.432 --rc geninfo_all_blocks=1 00:14:51.432 --rc geninfo_unexecuted_blocks=1 00:14:51.432 00:14:51.432 ' 00:14:51.432 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:51.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.432 --rc genhtml_branch_coverage=1 00:14:51.432 --rc genhtml_function_coverage=1 00:14:51.432 --rc genhtml_legend=1 00:14:51.432 --rc geninfo_all_blocks=1 00:14:51.432 --rc geninfo_unexecuted_blocks=1 00:14:51.432 00:14:51.432 ' 00:14:51.432 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:51.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.432 --rc genhtml_branch_coverage=1 00:14:51.432 --rc genhtml_function_coverage=1 00:14:51.432 --rc genhtml_legend=1 00:14:51.432 --rc geninfo_all_blocks=1 00:14:51.432 --rc geninfo_unexecuted_blocks=1 
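The cmp_versions trace above compares the installed lcov version (1.15) against 2 field by field: both strings are split on '.', '-' and ':', each pair of fields is compared numerically, and the first inequality decides the result (here 1 < 2, so "lt 1.15 2" succeeds and the coverage options are enabled). A minimal sketch of the same per-field comparison, assuming plain numeric fields are enough:

    # return 0 (true) when $1 sorts strictly before $2, comparing fields numerically
    version_lt() {
        local -a a b
        local i
        IFS='.-:' read -ra a <<< "$1"
        IFS='.-:' read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1
    }
    version_lt 1.15 2 && echo "1.15 is older than 2"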
00:14:51.432 00:14:51.432 ' 00:14:51.432 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:51.432 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:14:51.432 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:51.432 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:51.432 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:51.432 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=91838eb1-5852-43eb-90b2-09876f360ab2 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:14:51.433 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:51.433 19:48:46 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:51.433 Cannot find device "nvmf_init_br" 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:51.433 Cannot find device "nvmf_init_br2" 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:51.433 Cannot find device "nvmf_tgt_br" 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:51.433 Cannot find device "nvmf_tgt_br2" 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:51.433 Cannot find device "nvmf_init_br" 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:51.433 Cannot find device "nvmf_init_br2" 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:51.433 Cannot find device "nvmf_tgt_br" 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set 
nvmf_tgt_br2 down 00:14:51.433 Cannot find device "nvmf_tgt_br2" 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:14:51.433 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:51.433 Cannot find device "nvmf_br" 00:14:51.434 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:14:51.434 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:51.434 Cannot find device "nvmf_init_if" 00:14:51.434 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:14:51.434 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:51.434 Cannot find device "nvmf_init_if2" 00:14:51.434 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:14:51.434 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:51.434 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:51.434 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:14:51.434 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:51.434 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:51.434 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:14:51.434 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:51.434 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:51.434 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:51.434 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:51.434 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:51.434 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:51.434 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:51.434 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:51.434 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:51.434 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:51.434 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:51.692 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:51.692 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:51.692 19:48:46 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:51.692 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:51.692 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:51.692 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:51.692 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:51.692 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:51.692 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:51.692 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:51.692 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:51.692 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:51.692 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:51.692 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:51.692 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:51.692 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:51.692 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:51.692 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:51.692 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:51.692 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:51.692 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:51.692 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:51.692 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:51.692 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:14:51.692 00:14:51.693 --- 10.0.0.3 ping statistics --- 00:14:51.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.693 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:14:51.693 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:51.693 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
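The nvmf/common.sh trace above first tears down any leftovers from a previous run (each "Cannot find device" is an ignored error, hence the immediate "true"), then rebuilds the test topology: two initiator-side veth pairs stay in the default namespace with 10.0.0.1/24 and 10.0.0.2/24, two target-side pairs are moved into nvmf_tgt_ns_spdk with 10.0.0.3/24 and 10.0.0.4/24, the bridge-facing peers are enslaved to nvmf_br, iptables ACCEPT rules tagged SPDK_NVMF are inserted for NVMe/TCP port 4420, and connectivity is verified with single pings. Condensed into a standalone sketch (interface names and addresses are taken from the trace; the consolidated form is illustrative, not the literal common.sh code):

  #!/usr/bin/env bash
  set -e
  NS=nvmf_tgt_ns_spdk

  # Recreate the namespace and the four veth pairs.
  ip netns add "$NS"
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

  # Target-facing ends live inside the namespace.
  ip link set nvmf_tgt_if  netns "$NS"
  ip link set nvmf_tgt_if2 netns "$NS"

  # Addressing: initiator side 10.0.0.1/.2, target side 10.0.0.3/.4.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec "$NS" ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

  # Bring everything up, create the bridge, and enslave the bridge-side peers.
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" up
  done
  ip netns exec "$NS" ip link set nvmf_tgt_if up
  ip netns exec "$NS" ip link set nvmf_tgt_if2 up
  ip netns exec "$NS" ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br
  done

  # Allow NVMe/TCP traffic (port 4420) in from the initiator interfaces.
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.3   # default namespace -> target namespace

Keeping the target side in its own namespace is what lets the test later delete 10.0.0.3 and down the link without disturbing the build host's real networking.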
00:14:51.693 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.035 ms 00:14:51.693 00:14:51.693 --- 10.0.0.4 ping statistics --- 00:14:51.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.693 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:14:51.693 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:51.693 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:51.693 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:14:51.693 00:14:51.693 --- 10.0.0.1 ping statistics --- 00:14:51.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.693 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:14:51.693 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:51.693 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:51.693 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:14:51.693 00:14:51.693 --- 10.0.0.2 ping statistics --- 00:14:51.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.693 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:14:51.693 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:51.693 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:14:51.693 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:51.693 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:51.693 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:51.693 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:51.693 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:51.693 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:51.693 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:51.693 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:14:51.693 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:51.693 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:51.693 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:14:51.693 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=75815 00:14:51.693 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 75815 00:14:51.693 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 75815 ']' 00:14:51.693 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:51.693 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:51.693 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 
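nvmfappstart then prefixes the target command with the namespace wrapper (NVMF_TARGET_NS_CMD), so nvmf_tgt runs entirely inside nvmf_tgt_ns_spdk while its RPC socket stays reachable from the test shell, and waitforlisten polls until the application answers on that socket. A simplified sketch of that launch-and-wait pattern (the polling loop is a stand-in for the real waitforlisten helper, and /var/tmp/spdk.sock is assumed here as the default RPC socket):

  SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  NS_EXEC=(ip netns exec nvmf_tgt_ns_spdk)

  # Load the kernel NVMe/TCP module, as the trace does before starting the target.
  modprobe nvme-tcp

  # Start the target inside the namespace; -m 0x2 runs the reactor on core 1,
  # -e 0xFFFF enables all tracepoint groups (both visible in the app's startup notices).
  "${NS_EXEC[@]}" "$SPDK_BIN/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!

  # Simplified waitforlisten: poll the RPC socket until the app responds.
  for _ in $(seq 1 100); do
      if "$RPC" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
          break
      fi
      sleep 0.1
  done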
00:14:51.693 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:51.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:51.693 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:51.693 19:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:14:51.693 [2024-11-26 19:48:46.832583] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:14:51.693 [2024-11-26 19:48:46.832646] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:51.950 [2024-11-26 19:48:46.967118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.950 [2024-11-26 19:48:47.002858] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:51.950 [2024-11-26 19:48:47.002899] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:51.950 [2024-11-26 19:48:47.002905] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:51.950 [2024-11-26 19:48:47.002910] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:51.950 [2024-11-26 19:48:47.002914] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:51.950 [2024-11-26 19:48:47.003189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:51.950 [2024-11-26 19:48:47.038229] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:52.516 19:48:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:52.516 19:48:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:14:52.516 19:48:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:52.516 19:48:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:52.516 19:48:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:14:52.516 19:48:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:52.516 19:48:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:14:52.516 19:48:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.516 19:48:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:14:52.774 [2024-11-26 19:48:47.762531] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:52.774 [2024-11-26 19:48:47.770622] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:14:52.774 null0 00:14:52.774 [2024-11-26 19:48:47.802581] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:52.774 19:48:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.774 19:48:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=75847 00:14:52.774 19:48:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 75847 /tmp/host.sock 00:14:52.774 19:48:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 75847 ']' 00:14:52.774 19:48:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:14:52.774 19:48:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:14:52.774 19:48:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:52.774 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:14:52.774 19:48:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:14:52.774 19:48:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:52.774 19:48:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:14:52.774 [2024-11-26 19:48:47.861841] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:14:52.774 [2024-11-26 19:48:47.861908] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75847 ] 00:14:52.774 [2024-11-26 19:48:48.000936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:53.031 [2024-11-26 19:48:48.037883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:53.597 19:48:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:53.597 19:48:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:14:53.597 19:48:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:53.597 19:48:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:14:53.597 19:48:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.597 19:48:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:14:53.597 19:48:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.597 19:48:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:14:53.597 19:48:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.597 19:48:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:14:53.597 [2024-11-26 19:48:48.748772] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:53.597 19:48:48 
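At this point there are two SPDK processes: the target inside the namespace, now listening on 10.0.0.3:8009 (discovery) and 10.0.0.3:4420 (an NVM subsystem backed by a null bdev), and a second nvmf_tgt acting as the "host", started with its own RPC socket so the two sides can be driven independently. The target-side configuration is issued through rpc_cmd calls whose arguments are not expanded in this trace; a hedged sketch of RPCs that would produce an equivalent state (the subsystem NQN and the 8009/4420 listeners match the trace, while the bdev name, size, and block size are illustrative):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Target side (default RPC socket of the process running inside the namespace).
  "$RPC" nvmf_create_transport -t tcp
  "$RPC" bdev_null_create null0 100 4096                 # name, size in MB, block size (illustrative)
  "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
  "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
  "$RPC" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 8009   # 'discovery' addresses the well-known discovery subsystem

  # Host side: a second nvmf_tgt with a private RPC socket, used purely as an initiator.
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
  hostpid=$!

Driving the two processes through separate sockets (the default socket for the target, -s /tmp/host.sock for the host) is what lets the same rpc_cmd wrapper address either side throughout the rest of the test.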
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.597 19:48:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:14:53.597 19:48:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.597 19:48:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:14:54.970 [2024-11-26 19:48:49.795352] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:14:54.970 [2024-11-26 19:48:49.795385] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:14:54.970 [2024-11-26 19:48:49.795400] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:14:54.970 [2024-11-26 19:48:49.801393] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:14:54.970 [2024-11-26 19:48:49.855733] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:14:54.970 [2024-11-26 19:48:49.856593] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1a82000:1 started. 00:14:54.970 [2024-11-26 19:48:49.858111] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:14:54.970 [2024-11-26 19:48:49.858160] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:14:54.970 [2024-11-26 19:48:49.858180] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:14:54.970 [2024-11-26 19:48:49.858193] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:14:54.970 [2024-11-26 19:48:49.858213] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:14:54.970 19:48:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.970 19:48:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:14:54.970 19:48:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:14:54.970 19:48:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:54.970 19:48:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.970 19:48:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:14:54.970 19:48:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:14:54.970 [2024-11-26 19:48:49.863946] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1a82000 was disconnected and freed. delete nvme_qpair. 
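The host process attaches through the discovery service rather than connecting to the I/O subsystem directly: bdev_nvme_start_discovery points the bdev_nvme module at 10.0.0.3:8009, and every NVM subsystem reported in the discovery log page is attached automatically, which is what makes nvme0n1 appear above. The timeout flags set an aggressive reconnect policy so the interface removal performed next is detected within a couple of seconds. Restated as plain rpc.py invocations (all arguments are taken verbatim from the trace):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  "$RPC" -s /tmp/host.sock bdev_nvme_set_options -e 1
  "$RPC" -s /tmp/host.sock framework_start_init
  "$RPC" -s /tmp/host.sock bdev_nvme_start_discovery \
      -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 \
      -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 \
      --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 \
      --wait-for-attach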
00:14:54.970 19:48:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:14:54.970 19:48:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:14:54.970 19:48:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.970 19:48:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:14:54.970 19:48:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:14:54.970 19:48:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:14:54.970 19:48:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:14:54.970 19:48:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:14:54.970 19:48:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:14:54.970 19:48:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:14:54.970 19:48:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:54.970 19:48:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.970 19:48:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:14:54.970 19:48:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:14:54.970 19:48:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.970 19:48:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:14:54.970 19:48:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:14:55.902 19:48:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:14:55.903 19:48:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:14:55.903 19:48:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:55.903 19:48:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:14:55.903 19:48:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:14:55.903 19:48:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.903 19:48:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:14:55.903 19:48:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.903 19:48:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:14:55.903 19:48:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:14:56.834 19:48:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:14:56.834 19:48:51 
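get_bdev_list and wait_for_bdev are the small helpers behind the repeated bdev_get_bdevs | jq | sort | xargs calls and one-second sleeps seen from here on: wait_for_bdev simply polls until the bdev list equals the expected string. After confirming nvme0n1 exists, the test deletes 10.0.0.3 from nvmf_tgt_if and downs the link inside the namespace, then waits for the list to become empty. A sketch of the two helpers as they are used in this trace (reconstructed from the traced pipeline; the real helper in discovery_remove_ifc.sh presumably also bounds the number of retries):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  get_bdev_list() {
      "$RPC" -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

  wait_for_bdev() {
      local expected=$1
      while [[ "$(get_bdev_list)" != "$expected" ]]; do
          sleep 1
      done
  }

  wait_for_bdev nvme0n1                                               # produced by the discovery attach
  ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
  wait_for_bdev ''                                                    # path loss must remove the bdev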
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:56.834 19:48:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:14:56.834 19:48:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.834 19:48:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:14:56.834 19:48:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:14:56.834 19:48:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:14:56.834 19:48:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.835 19:48:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:14:56.835 19:48:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:14:58.268 19:48:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:14:58.268 19:48:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:14:58.268 19:48:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:14:58.268 19:48:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:58.268 19:48:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:14:58.268 19:48:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.268 19:48:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:14:58.268 19:48:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.268 19:48:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:14:58.268 19:48:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:14:58.834 19:48:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:14:58.834 19:48:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:58.834 19:48:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.834 19:48:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:14:58.834 19:48:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:14:58.834 19:48:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:14:58.834 19:48:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:14:59.092 19:48:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.092 19:48:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:14:59.092 19:48:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:00.023 19:48:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:00.023 19:48:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:00.023 19:48:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.023 19:48:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:00.023 19:48:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:00.023 19:48:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:00.023 19:48:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:00.023 19:48:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.023 19:48:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:00.023 19:48:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:00.281 [2024-11-26 19:48:55.286582] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:15:00.281 [2024-11-26 19:48:55.286774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:00.281 [2024-11-26 19:48:55.286837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:00.281 [2024-11-26 19:48:55.286865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:00.281 [2024-11-26 19:48:55.286887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:00.281 [2024-11-26 19:48:55.286935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:00.281 [2024-11-26 19:48:55.287066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:00.281 [2024-11-26 19:48:55.287089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:00.281 [2024-11-26 19:48:55.287112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:00.281 [2024-11-26 19:48:55.287134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:15:00.281 [2024-11-26 19:48:55.287156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:00.281 [2024-11-26 19:48:55.287177] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5e250 is same with the state(6) to be set 00:15:00.281 [2024-11-26 19:48:55.296579] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a5e250 (9): Bad file descriptor 00:15:00.281 [2024-11-26 19:48:55.306593] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 
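The errno 110 (ETIMEDOUT) and the ABORTED - SQ DELETION completions above are the direct consequence of taking nvmf_tgt_if down: keep-alives and outstanding admin commands fail, bdev_nvme tears down the qpair and, per --reconnect-delay-sec 1, retries the connection once per second until --ctrlr-loss-timeout-sec 2 expires, after which the controller is deleted and nvme0n1 disappears from the bdev list. Controller state during this window can also be observed from the host socket, for example (an optional check, not part of the traced script):

  # Watch the bdev_nvme controller while the reconnect/loss timers run.
  watch -n 1 \
      "/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers"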
00:15:00.281 [2024-11-26 19:48:55.306652] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:15:00.281 [2024-11-26 19:48:55.306657] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:15:00.281 [2024-11-26 19:48:55.306660] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:15:00.281 [2024-11-26 19:48:55.306686] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:15:01.212 19:48:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:01.212 19:48:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:01.212 19:48:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:01.212 19:48:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:01.212 19:48:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.212 19:48:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:01.212 19:48:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:01.212 [2024-11-26 19:48:56.357824] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:15:01.212 [2024-11-26 19:48:56.357911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5e250 with addr=10.0.0.3, port=4420 00:15:01.212 [2024-11-26 19:48:56.357930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5e250 is same with the state(6) to be set 00:15:01.212 [2024-11-26 19:48:56.357960] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a5e250 (9): Bad file descriptor 00:15:01.212 [2024-11-26 19:48:56.358477] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:15:01.212 [2024-11-26 19:48:56.358515] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:15:01.212 [2024-11-26 19:48:56.358526] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:15:01.212 [2024-11-26 19:48:56.358537] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:15:01.212 [2024-11-26 19:48:56.358547] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:15:01.212 [2024-11-26 19:48:56.358554] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:15:01.212 [2024-11-26 19:48:56.358560] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:15:01.212 [2024-11-26 19:48:56.358571] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:15:01.212 [2024-11-26 19:48:56.358577] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:15:01.212 19:48:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.212 19:48:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:01.212 19:48:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:02.188 [2024-11-26 19:48:57.358618] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:15:02.188 [2024-11-26 19:48:57.358801] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:15:02.188 [2024-11-26 19:48:57.358827] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:15:02.188 [2024-11-26 19:48:57.358833] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:15:02.188 [2024-11-26 19:48:57.358840] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:15:02.188 [2024-11-26 19:48:57.358846] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:15:02.188 [2024-11-26 19:48:57.358851] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:15:02.188 [2024-11-26 19:48:57.358854] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:15:02.188 [2024-11-26 19:48:57.358879] bdev_nvme.c:7235:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:15:02.188 [2024-11-26 19:48:57.358911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:02.188 [2024-11-26 19:48:57.358920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:02.188 [2024-11-26 19:48:57.358929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:02.188 [2024-11-26 19:48:57.358934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:02.188 [2024-11-26 19:48:57.358941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:02.188 [2024-11-26 19:48:57.358946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:02.188 [2024-11-26 19:48:57.358953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:02.188 [2024-11-26 19:48:57.358958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:02.188 [2024-11-26 19:48:57.358965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:15:02.188 [2024-11-26 19:48:57.358970] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:02.188 [2024-11-26 19:48:57.358976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:15:02.188 [2024-11-26 19:48:57.359002] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e9a20 (9): Bad file descriptor 00:15:02.188 [2024-11-26 19:48:57.359996] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:15:02.188 [2024-11-26 19:48:57.360005] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:15:02.188 19:48:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:02.188 19:48:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:02.188 19:48:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:02.188 19:48:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.188 19:48:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:02.188 19:48:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:02.188 19:48:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:02.188 19:48:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.188 19:48:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:15:02.188 19:48:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:02.188 19:48:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:02.188 19:48:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:15:02.445 19:48:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:02.445 19:48:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:02.445 19:48:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:02.445 19:48:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:02.445 19:48:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.445 19:48:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:02.445 19:48:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:02.445 19:48:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.445 19:48:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:15:02.445 19:48:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:03.409 19:48:58 
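Once the bdev list is empty the test restores the path: the address is re-added and the link brought back up inside the namespace, and because the discovery poller is still running against 10.0.0.3:8009 it re-attaches the subsystem as a new controller (nvme1), so the test now waits for nvme1n1 rather than nvme0n1. In script form (commands as traced; the wait uses the same helper sketched after the discovery step):

  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  wait_for_bdev nvme1n1    # rediscovery attaches a fresh controller, hence the new name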
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:03.410 19:48:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:03.410 19:48:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:03.410 19:48:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.410 19:48:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:03.410 19:48:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:03.410 19:48:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:03.410 19:48:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.410 19:48:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:15:03.410 19:48:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:04.340 [2024-11-26 19:48:59.368355] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:15:04.340 [2024-11-26 19:48:59.368380] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:15:04.340 [2024-11-26 19:48:59.368392] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:15:04.340 [2024-11-26 19:48:59.374386] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:15:04.340 [2024-11-26 19:48:59.428742] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:15:04.340 [2024-11-26 19:48:59.429476] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1a69d80:1 started. 00:15:04.340 [2024-11-26 19:48:59.430658] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:15:04.340 [2024-11-26 19:48:59.430774] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:15:04.340 [2024-11-26 19:48:59.430811] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:15:04.340 [2024-11-26 19:48:59.430873] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:15:04.340 [2024-11-26 19:48:59.430904] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:15:04.340 [2024-11-26 19:48:59.437268] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1a69d80 was disconnected and freed. delete nvme_qpair. 
00:15:04.340 19:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:04.340 19:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:04.340 19:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:04.340 19:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.340 19:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:04.340 19:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:04.340 19:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:04.340 19:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.340 19:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:15:04.340 19:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:15:04.340 19:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 75847 00:15:04.340 19:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 75847 ']' 00:15:04.340 19:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 75847 00:15:04.340 19:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:15:04.340 19:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:04.340 19:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75847 00:15:04.340 19:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:04.340 19:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:04.340 19:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75847' 00:15:04.340 killing process with pid 75847 00:15:04.340 19:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 75847 00:15:04.340 19:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 75847 00:15:04.597 19:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:15:04.597 19:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:04.597 19:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:15:04.597 19:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:04.597 19:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:15:04.597 19:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:04.597 19:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:04.597 rmmod nvme_tcp 00:15:04.598 rmmod nvme_fabrics 00:15:04.598 rmmod nvme_keyring 00:15:04.598 19:48:59 
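killprocess is the shared autotest helper behind the "killing process with pid ..." lines above: it verifies the pid still exists, checks the process name so it never signals an unrelated or privileged process, then kills it and waits for it to exit. A simplified sketch of that pattern (the real helper in autotest_common.sh also handles sudo-owned processes and other corner cases):

  killprocess() {
      local pid=$1
      kill -0 "$pid"                              # fail fast if it is already gone
      local name
      name=$(ps --no-headers -o comm= "$pid")
      if [[ "$name" == "sudo" ]]; then
          echo "refusing to signal a sudo wrapper" >&2
          return 1
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" || true                         # reap it; works because it was backgrounded from this shell
  }

  killprocess "$hostpid"    # 75847 in this run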
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:04.598 19:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:15:04.598 19:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:15:04.598 19:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 75815 ']' 00:15:04.598 19:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 75815 00:15:04.598 19:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 75815 ']' 00:15:04.598 19:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 75815 00:15:04.598 19:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:15:04.598 19:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:04.598 19:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75815 00:15:04.598 killing process with pid 75815 00:15:04.598 19:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:04.598 19:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:04.598 19:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75815' 00:15:04.598 19:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 75815 00:15:04.598 19:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 75815 00:15:04.855 19:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:04.855 19:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:04.855 19:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:04.855 19:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:15:04.855 19:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:15:04.855 19:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:15:04.855 19:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:04.855 19:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:04.855 19:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:04.855 19:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:04.855 19:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:04.855 19:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:04.855 19:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:04.855 19:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:04.855 19:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:04.855 19:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:04.855 19:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:04.855 19:48:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:04.855 19:49:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:04.855 19:49:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:04.855 19:49:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:04.855 19:49:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:05.113 19:49:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:05.113 19:49:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:05.113 19:49:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:05.113 19:49:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:05.113 19:49:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:15:05.113 00:15:05.113 real 0m13.815s 00:15:05.113 user 0m23.470s 00:15:05.113 sys 0m2.129s 00:15:05.113 19:49:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:05.113 ************************************ 00:15:05.113 END TEST nvmf_discovery_remove_ifc 00:15:05.113 ************************************ 00:15:05.113 19:49:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:05.113 19:49:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:15:05.113 19:49:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:05.113 19:49:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:05.113 19:49:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:05.113 ************************************ 00:15:05.113 START TEST nvmf_identify_kernel_target 00:15:05.113 ************************************ 00:15:05.113 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:15:05.113 * Looking for test storage... 
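nvmftestfini unwinds everything in reverse: the kernel nvme modules are unloaded, the target process is killed, the SPDK-tagged iptables rules are dropped by filtering the comment string out of an iptables-save dump, and the veth/bridge/namespace topology is deleted, after which the next test in the suite (nvmf_identify_kernel_target) starts with a clean slate. The iptables step is worth noting because it removes only the rules this test inserted; a sketch of that idiom plus a simplified link teardown (device names as traced):

  # Drop only the rules tagged with the SPDK_NVMF comment, leave everything else intact.
  iptables-save | grep -v SPDK_NVMF | iptables-restore

  # Tear down the veth/bridge topology and the target namespace.
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" nomaster || true
  done
  ip link delete nvmf_br type bridge || true
  ip link delete nvmf_init_if || true
  ip link delete nvmf_init_if2 || true
  ip netns delete nvmf_tgt_ns_spdk || true    # destroys the interfaces that were moved into it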
00:15:05.113 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:05.113 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:05.113 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:05.113 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:15:05.113 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:05.113 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:05.113 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:05.113 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:05.113 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:15:05.113 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:15:05.113 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:15:05.113 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:15:05.113 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:15:05.113 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:15:05.113 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:15:05.113 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:05.113 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:15:05.113 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:15:05.113 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:05.113 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:05.113 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:15:05.113 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:15:05.113 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:05.113 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:15:05.113 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:15:05.113 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:15:05.113 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:15:05.113 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:05.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:05.114 --rc genhtml_branch_coverage=1 00:15:05.114 --rc genhtml_function_coverage=1 00:15:05.114 --rc genhtml_legend=1 00:15:05.114 --rc geninfo_all_blocks=1 00:15:05.114 --rc geninfo_unexecuted_blocks=1 00:15:05.114 00:15:05.114 ' 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:05.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:05.114 --rc genhtml_branch_coverage=1 00:15:05.114 --rc genhtml_function_coverage=1 00:15:05.114 --rc genhtml_legend=1 00:15:05.114 --rc geninfo_all_blocks=1 00:15:05.114 --rc geninfo_unexecuted_blocks=1 00:15:05.114 00:15:05.114 ' 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:05.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:05.114 --rc genhtml_branch_coverage=1 00:15:05.114 --rc genhtml_function_coverage=1 00:15:05.114 --rc genhtml_legend=1 00:15:05.114 --rc geninfo_all_blocks=1 00:15:05.114 --rc geninfo_unexecuted_blocks=1 00:15:05.114 00:15:05.114 ' 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:05.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:05.114 --rc genhtml_branch_coverage=1 00:15:05.114 --rc genhtml_function_coverage=1 00:15:05.114 --rc genhtml_legend=1 00:15:05.114 --rc geninfo_all_blocks=1 00:15:05.114 --rc geninfo_unexecuted_blocks=1 00:15:05.114 00:15:05.114 ' 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
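The cmp_versions trace at the start of nvmf_identify_kernel_target is the usual "is the installed lcov older than 2.x" probe: lcov's reported version (1.15 here) is split on dots and compared field by field against 2, and because 1 < 2 the test exports the older --rc lcov_branch_coverage/--rc lcov_function_coverage option spelling. A compact, self-contained version of that comparison (a stand-in for the scripts/common.sh helpers, not the original code):

  version_lt() {    # usage: version_lt 1.15 2  -> returns 0 (true) when $1 < $2
      local IFS=.-
      local -a a=($1) b=($2)
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          local x=${a[i]:-0} y=${b[i]:-0}
          ((x < y)) && return 0
          ((x > y)) && return 1
      done
      return 1    # versions are equal
  }

  if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
      lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
  fi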
00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=91838eb1-5852-43eb-90b2-09876f360ab2 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:05.114 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:15:05.114 19:49:00 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:05.114 19:49:00 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:05.114 Cannot find device "nvmf_init_br" 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:05.114 Cannot find device "nvmf_init_br2" 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:05.114 Cannot find device "nvmf_tgt_br" 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:15:05.114 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:05.114 Cannot find device "nvmf_tgt_br2" 00:15:05.372 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:15:05.372 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:05.372 Cannot find device "nvmf_init_br" 00:15:05.372 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:15:05.372 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:05.372 Cannot find device "nvmf_init_br2" 00:15:05.372 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:15:05.372 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:05.372 Cannot find device "nvmf_tgt_br" 00:15:05.372 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:15:05.372 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:05.372 Cannot find device "nvmf_tgt_br2" 00:15:05.372 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:15:05.372 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:05.372 Cannot find device "nvmf_br" 00:15:05.372 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:15:05.372 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:05.372 Cannot find device "nvmf_init_if" 00:15:05.372 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:15:05.372 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:05.372 Cannot find device "nvmf_init_if2" 00:15:05.372 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:15:05.372 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:05.372 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:05.372 19:49:00 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:15:05.372 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:05.372 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:05.372 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:15:05.372 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:05.372 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:05.372 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:05.372 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:05.372 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:05.372 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:05.372 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:05.372 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:05.372 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:05.372 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:05.372 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:05.372 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:05.372 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:05.372 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:05.372 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:05.372 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:05.372 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:05.372 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:05.372 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:05.372 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:05.372 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:05.372 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:05.372 19:49:00 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:05.372 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:05.372 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:05.372 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:05.372 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:05.372 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:05.372 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:05.372 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:05.372 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:05.372 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:05.372 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:05.372 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:05.372 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:15:05.372 00:15:05.372 --- 10.0.0.3 ping statistics --- 00:15:05.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:05.373 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:15:05.373 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:05.373 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:05.373 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.027 ms 00:15:05.373 00:15:05.373 --- 10.0.0.4 ping statistics --- 00:15:05.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:05.373 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:15:05.373 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:05.373 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:05.373 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:15:05.373 00:15:05.373 --- 10.0.0.1 ping statistics --- 00:15:05.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:05.373 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:15:05.373 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:05.373 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:05.373 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:15:05.373 00:15:05.373 --- 10.0.0.2 ping statistics --- 00:15:05.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:05.373 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:15:05.373 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:05.373 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 00:15:05.373 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:05.373 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:05.373 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:05.373 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:05.373 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:05.373 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:05.373 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:05.373 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:15:05.373 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:15:05.373 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:15:05.373 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:05.373 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:05.373 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:05.373 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:05.373 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:05.373 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:05.373 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:05.373 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:05.373 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:05.373 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:15:05.373 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:15:05.373 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:15:05.373 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:15:05.373 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:15:05.373 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:15:05.373 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:15:05.373 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:15:05.373 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:15:05.373 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:15:05.373 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:15:05.373 19:49:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:05.630 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:05.889 Waiting for block devices as requested 00:15:05.889 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:15:05.889 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:15:05.889 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:15:05.889 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:15:05.889 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:15:05.889 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:15:05.889 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:05.889 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:05.889 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:15:05.889 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:15:05.889 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:15:06.149 No valid GPT data, bailing 00:15:06.149 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:15:06.149 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:15:06.149 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:15:06.149 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:15:06.149 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:15:06.149 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:15:06.149 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:15:06.149 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:15:06.149 19:49:01 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:15:06.149 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:06.149 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:15:06.149 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:15:06.149 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:15:06.149 No valid GPT data, bailing 00:15:06.149 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:15:06.149 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:15:06.149 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:15:06.149 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:15:06.149 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:15:06.149 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:15:06.149 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:15:06.149 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:15:06.149 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:15:06.149 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:06.149 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:15:06.149 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:15:06.149 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:15:06.149 No valid GPT data, bailing 00:15:06.149 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:15:06.149 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:15:06.149 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:15:06.149 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:15:06.149 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:15:06.149 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:15:06.149 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:15:06.149 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:15:06.149 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:15:06.149 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:06.149 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:15:06.149 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:15:06.149 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:15:06.149 No valid GPT data, bailing 00:15:06.149 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:15:06.149 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:15:06.149 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:15:06.149 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:15:06.149 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:15:06.149 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:15:06.149 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:15:06.149 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:15:06.409 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:15:06.409 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:15:06.409 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:15:06.409 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:15:06.409 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:15:06.409 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:15:06.409 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:15:06.409 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:15:06.409 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:15:06.409 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid=91838eb1-5852-43eb-90b2-09876f360ab2 -a 10.0.0.1 -t tcp -s 4420 00:15:06.409 00:15:06.409 Discovery Log Number of Records 2, Generation counter 2 00:15:06.409 =====Discovery Log Entry 0====== 00:15:06.409 trtype: tcp 00:15:06.409 adrfam: ipv4 00:15:06.409 subtype: current discovery subsystem 00:15:06.409 treq: not specified, sq flow control disable supported 00:15:06.409 portid: 1 00:15:06.409 trsvcid: 4420 00:15:06.409 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:06.409 traddr: 10.0.0.1 00:15:06.409 eflags: none 00:15:06.409 sectype: none 00:15:06.409 =====Discovery Log Entry 1====== 00:15:06.409 trtype: tcp 00:15:06.409 adrfam: ipv4 00:15:06.409 subtype: nvme subsystem 00:15:06.409 treq: not 
specified, sq flow control disable supported 00:15:06.409 portid: 1 00:15:06.409 trsvcid: 4420 00:15:06.409 subnqn: nqn.2016-06.io.spdk:testnqn 00:15:06.409 traddr: 10.0.0.1 00:15:06.409 eflags: none 00:15:06.409 sectype: none 00:15:06.409 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:15:06.409 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:15:06.409 ===================================================== 00:15:06.409 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:15:06.409 ===================================================== 00:15:06.409 Controller Capabilities/Features 00:15:06.409 ================================ 00:15:06.409 Vendor ID: 0000 00:15:06.409 Subsystem Vendor ID: 0000 00:15:06.409 Serial Number: 4a3a583892ff17092783 00:15:06.409 Model Number: Linux 00:15:06.409 Firmware Version: 6.8.9-20 00:15:06.409 Recommended Arb Burst: 0 00:15:06.409 IEEE OUI Identifier: 00 00 00 00:15:06.409 Multi-path I/O 00:15:06.409 May have multiple subsystem ports: No 00:15:06.409 May have multiple controllers: No 00:15:06.409 Associated with SR-IOV VF: No 00:15:06.409 Max Data Transfer Size: Unlimited 00:15:06.409 Max Number of Namespaces: 0 00:15:06.409 Max Number of I/O Queues: 1024 00:15:06.409 NVMe Specification Version (VS): 1.3 00:15:06.409 NVMe Specification Version (Identify): 1.3 00:15:06.409 Maximum Queue Entries: 1024 00:15:06.409 Contiguous Queues Required: No 00:15:06.409 Arbitration Mechanisms Supported 00:15:06.409 Weighted Round Robin: Not Supported 00:15:06.409 Vendor Specific: Not Supported 00:15:06.409 Reset Timeout: 7500 ms 00:15:06.409 Doorbell Stride: 4 bytes 00:15:06.409 NVM Subsystem Reset: Not Supported 00:15:06.409 Command Sets Supported 00:15:06.409 NVM Command Set: Supported 00:15:06.409 Boot Partition: Not Supported 00:15:06.409 Memory Page Size Minimum: 4096 bytes 00:15:06.409 Memory Page Size Maximum: 4096 bytes 00:15:06.409 Persistent Memory Region: Not Supported 00:15:06.409 Optional Asynchronous Events Supported 00:15:06.409 Namespace Attribute Notices: Not Supported 00:15:06.409 Firmware Activation Notices: Not Supported 00:15:06.409 ANA Change Notices: Not Supported 00:15:06.409 PLE Aggregate Log Change Notices: Not Supported 00:15:06.409 LBA Status Info Alert Notices: Not Supported 00:15:06.409 EGE Aggregate Log Change Notices: Not Supported 00:15:06.409 Normal NVM Subsystem Shutdown event: Not Supported 00:15:06.409 Zone Descriptor Change Notices: Not Supported 00:15:06.409 Discovery Log Change Notices: Supported 00:15:06.409 Controller Attributes 00:15:06.409 128-bit Host Identifier: Not Supported 00:15:06.409 Non-Operational Permissive Mode: Not Supported 00:15:06.409 NVM Sets: Not Supported 00:15:06.409 Read Recovery Levels: Not Supported 00:15:06.409 Endurance Groups: Not Supported 00:15:06.409 Predictable Latency Mode: Not Supported 00:15:06.409 Traffic Based Keep ALive: Not Supported 00:15:06.409 Namespace Granularity: Not Supported 00:15:06.409 SQ Associations: Not Supported 00:15:06.409 UUID List: Not Supported 00:15:06.409 Multi-Domain Subsystem: Not Supported 00:15:06.409 Fixed Capacity Management: Not Supported 00:15:06.409 Variable Capacity Management: Not Supported 00:15:06.409 Delete Endurance Group: Not Supported 00:15:06.409 Delete NVM Set: Not Supported 00:15:06.409 Extended LBA Formats Supported: Not Supported 00:15:06.409 Flexible Data 
Placement Supported: Not Supported 00:15:06.409 00:15:06.409 Controller Memory Buffer Support 00:15:06.409 ================================ 00:15:06.409 Supported: No 00:15:06.409 00:15:06.409 Persistent Memory Region Support 00:15:06.409 ================================ 00:15:06.409 Supported: No 00:15:06.409 00:15:06.409 Admin Command Set Attributes 00:15:06.409 ============================ 00:15:06.409 Security Send/Receive: Not Supported 00:15:06.409 Format NVM: Not Supported 00:15:06.409 Firmware Activate/Download: Not Supported 00:15:06.409 Namespace Management: Not Supported 00:15:06.409 Device Self-Test: Not Supported 00:15:06.409 Directives: Not Supported 00:15:06.409 NVMe-MI: Not Supported 00:15:06.410 Virtualization Management: Not Supported 00:15:06.410 Doorbell Buffer Config: Not Supported 00:15:06.410 Get LBA Status Capability: Not Supported 00:15:06.410 Command & Feature Lockdown Capability: Not Supported 00:15:06.410 Abort Command Limit: 1 00:15:06.410 Async Event Request Limit: 1 00:15:06.410 Number of Firmware Slots: N/A 00:15:06.410 Firmware Slot 1 Read-Only: N/A 00:15:06.410 Firmware Activation Without Reset: N/A 00:15:06.410 Multiple Update Detection Support: N/A 00:15:06.410 Firmware Update Granularity: No Information Provided 00:15:06.410 Per-Namespace SMART Log: No 00:15:06.410 Asymmetric Namespace Access Log Page: Not Supported 00:15:06.410 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:15:06.410 Command Effects Log Page: Not Supported 00:15:06.410 Get Log Page Extended Data: Supported 00:15:06.410 Telemetry Log Pages: Not Supported 00:15:06.410 Persistent Event Log Pages: Not Supported 00:15:06.410 Supported Log Pages Log Page: May Support 00:15:06.410 Commands Supported & Effects Log Page: Not Supported 00:15:06.410 Feature Identifiers & Effects Log Page:May Support 00:15:06.410 NVMe-MI Commands & Effects Log Page: May Support 00:15:06.410 Data Area 4 for Telemetry Log: Not Supported 00:15:06.410 Error Log Page Entries Supported: 1 00:15:06.410 Keep Alive: Not Supported 00:15:06.410 00:15:06.410 NVM Command Set Attributes 00:15:06.410 ========================== 00:15:06.410 Submission Queue Entry Size 00:15:06.410 Max: 1 00:15:06.410 Min: 1 00:15:06.410 Completion Queue Entry Size 00:15:06.410 Max: 1 00:15:06.410 Min: 1 00:15:06.410 Number of Namespaces: 0 00:15:06.410 Compare Command: Not Supported 00:15:06.410 Write Uncorrectable Command: Not Supported 00:15:06.410 Dataset Management Command: Not Supported 00:15:06.410 Write Zeroes Command: Not Supported 00:15:06.410 Set Features Save Field: Not Supported 00:15:06.410 Reservations: Not Supported 00:15:06.410 Timestamp: Not Supported 00:15:06.410 Copy: Not Supported 00:15:06.410 Volatile Write Cache: Not Present 00:15:06.410 Atomic Write Unit (Normal): 1 00:15:06.410 Atomic Write Unit (PFail): 1 00:15:06.410 Atomic Compare & Write Unit: 1 00:15:06.410 Fused Compare & Write: Not Supported 00:15:06.410 Scatter-Gather List 00:15:06.410 SGL Command Set: Supported 00:15:06.410 SGL Keyed: Not Supported 00:15:06.410 SGL Bit Bucket Descriptor: Not Supported 00:15:06.410 SGL Metadata Pointer: Not Supported 00:15:06.410 Oversized SGL: Not Supported 00:15:06.410 SGL Metadata Address: Not Supported 00:15:06.410 SGL Offset: Supported 00:15:06.410 Transport SGL Data Block: Not Supported 00:15:06.410 Replay Protected Memory Block: Not Supported 00:15:06.410 00:15:06.410 Firmware Slot Information 00:15:06.410 ========================= 00:15:06.410 Active slot: 0 00:15:06.410 00:15:06.410 00:15:06.410 Error Log 
00:15:06.410 ========= 00:15:06.410 00:15:06.410 Active Namespaces 00:15:06.410 ================= 00:15:06.410 Discovery Log Page 00:15:06.410 ================== 00:15:06.410 Generation Counter: 2 00:15:06.410 Number of Records: 2 00:15:06.410 Record Format: 0 00:15:06.410 00:15:06.410 Discovery Log Entry 0 00:15:06.410 ---------------------- 00:15:06.410 Transport Type: 3 (TCP) 00:15:06.410 Address Family: 1 (IPv4) 00:15:06.410 Subsystem Type: 3 (Current Discovery Subsystem) 00:15:06.410 Entry Flags: 00:15:06.410 Duplicate Returned Information: 0 00:15:06.410 Explicit Persistent Connection Support for Discovery: 0 00:15:06.410 Transport Requirements: 00:15:06.410 Secure Channel: Not Specified 00:15:06.410 Port ID: 1 (0x0001) 00:15:06.410 Controller ID: 65535 (0xffff) 00:15:06.410 Admin Max SQ Size: 32 00:15:06.410 Transport Service Identifier: 4420 00:15:06.410 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:15:06.410 Transport Address: 10.0.0.1 00:15:06.410 Discovery Log Entry 1 00:15:06.410 ---------------------- 00:15:06.410 Transport Type: 3 (TCP) 00:15:06.410 Address Family: 1 (IPv4) 00:15:06.410 Subsystem Type: 2 (NVM Subsystem) 00:15:06.410 Entry Flags: 00:15:06.410 Duplicate Returned Information: 0 00:15:06.410 Explicit Persistent Connection Support for Discovery: 0 00:15:06.410 Transport Requirements: 00:15:06.410 Secure Channel: Not Specified 00:15:06.410 Port ID: 1 (0x0001) 00:15:06.410 Controller ID: 65535 (0xffff) 00:15:06.410 Admin Max SQ Size: 32 00:15:06.410 Transport Service Identifier: 4420 00:15:06.410 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:15:06.410 Transport Address: 10.0.0.1 00:15:06.410 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:15:06.670 get_feature(0x01) failed 00:15:06.670 get_feature(0x02) failed 00:15:06.670 get_feature(0x04) failed 00:15:06.671 ===================================================== 00:15:06.671 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:15:06.671 ===================================================== 00:15:06.671 Controller Capabilities/Features 00:15:06.671 ================================ 00:15:06.671 Vendor ID: 0000 00:15:06.671 Subsystem Vendor ID: 0000 00:15:06.671 Serial Number: c15b0371f7725e41bcb9 00:15:06.671 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:15:06.671 Firmware Version: 6.8.9-20 00:15:06.671 Recommended Arb Burst: 6 00:15:06.671 IEEE OUI Identifier: 00 00 00 00:15:06.671 Multi-path I/O 00:15:06.671 May have multiple subsystem ports: Yes 00:15:06.671 May have multiple controllers: Yes 00:15:06.671 Associated with SR-IOV VF: No 00:15:06.671 Max Data Transfer Size: Unlimited 00:15:06.671 Max Number of Namespaces: 1024 00:15:06.671 Max Number of I/O Queues: 128 00:15:06.671 NVMe Specification Version (VS): 1.3 00:15:06.671 NVMe Specification Version (Identify): 1.3 00:15:06.671 Maximum Queue Entries: 1024 00:15:06.671 Contiguous Queues Required: No 00:15:06.671 Arbitration Mechanisms Supported 00:15:06.671 Weighted Round Robin: Not Supported 00:15:06.671 Vendor Specific: Not Supported 00:15:06.671 Reset Timeout: 7500 ms 00:15:06.671 Doorbell Stride: 4 bytes 00:15:06.671 NVM Subsystem Reset: Not Supported 00:15:06.671 Command Sets Supported 00:15:06.671 NVM Command Set: Supported 00:15:06.671 Boot Partition: Not Supported 00:15:06.671 Memory 
Page Size Minimum: 4096 bytes 00:15:06.671 Memory Page Size Maximum: 4096 bytes 00:15:06.671 Persistent Memory Region: Not Supported 00:15:06.671 Optional Asynchronous Events Supported 00:15:06.671 Namespace Attribute Notices: Supported 00:15:06.671 Firmware Activation Notices: Not Supported 00:15:06.671 ANA Change Notices: Supported 00:15:06.671 PLE Aggregate Log Change Notices: Not Supported 00:15:06.671 LBA Status Info Alert Notices: Not Supported 00:15:06.671 EGE Aggregate Log Change Notices: Not Supported 00:15:06.671 Normal NVM Subsystem Shutdown event: Not Supported 00:15:06.671 Zone Descriptor Change Notices: Not Supported 00:15:06.671 Discovery Log Change Notices: Not Supported 00:15:06.671 Controller Attributes 00:15:06.671 128-bit Host Identifier: Supported 00:15:06.671 Non-Operational Permissive Mode: Not Supported 00:15:06.671 NVM Sets: Not Supported 00:15:06.671 Read Recovery Levels: Not Supported 00:15:06.671 Endurance Groups: Not Supported 00:15:06.671 Predictable Latency Mode: Not Supported 00:15:06.671 Traffic Based Keep ALive: Supported 00:15:06.671 Namespace Granularity: Not Supported 00:15:06.671 SQ Associations: Not Supported 00:15:06.671 UUID List: Not Supported 00:15:06.671 Multi-Domain Subsystem: Not Supported 00:15:06.671 Fixed Capacity Management: Not Supported 00:15:06.671 Variable Capacity Management: Not Supported 00:15:06.671 Delete Endurance Group: Not Supported 00:15:06.671 Delete NVM Set: Not Supported 00:15:06.671 Extended LBA Formats Supported: Not Supported 00:15:06.671 Flexible Data Placement Supported: Not Supported 00:15:06.671 00:15:06.671 Controller Memory Buffer Support 00:15:06.671 ================================ 00:15:06.671 Supported: No 00:15:06.671 00:15:06.671 Persistent Memory Region Support 00:15:06.671 ================================ 00:15:06.671 Supported: No 00:15:06.671 00:15:06.671 Admin Command Set Attributes 00:15:06.671 ============================ 00:15:06.671 Security Send/Receive: Not Supported 00:15:06.671 Format NVM: Not Supported 00:15:06.671 Firmware Activate/Download: Not Supported 00:15:06.671 Namespace Management: Not Supported 00:15:06.671 Device Self-Test: Not Supported 00:15:06.671 Directives: Not Supported 00:15:06.671 NVMe-MI: Not Supported 00:15:06.671 Virtualization Management: Not Supported 00:15:06.671 Doorbell Buffer Config: Not Supported 00:15:06.671 Get LBA Status Capability: Not Supported 00:15:06.671 Command & Feature Lockdown Capability: Not Supported 00:15:06.671 Abort Command Limit: 4 00:15:06.671 Async Event Request Limit: 4 00:15:06.671 Number of Firmware Slots: N/A 00:15:06.671 Firmware Slot 1 Read-Only: N/A 00:15:06.671 Firmware Activation Without Reset: N/A 00:15:06.671 Multiple Update Detection Support: N/A 00:15:06.671 Firmware Update Granularity: No Information Provided 00:15:06.671 Per-Namespace SMART Log: Yes 00:15:06.671 Asymmetric Namespace Access Log Page: Supported 00:15:06.671 ANA Transition Time : 10 sec 00:15:06.671 00:15:06.671 Asymmetric Namespace Access Capabilities 00:15:06.671 ANA Optimized State : Supported 00:15:06.671 ANA Non-Optimized State : Supported 00:15:06.671 ANA Inaccessible State : Supported 00:15:06.671 ANA Persistent Loss State : Supported 00:15:06.671 ANA Change State : Supported 00:15:06.671 ANAGRPID is not changed : No 00:15:06.671 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:15:06.671 00:15:06.671 ANA Group Identifier Maximum : 128 00:15:06.671 Number of ANA Group Identifiers : 128 00:15:06.671 Max Number of Allowed Namespaces : 1024 00:15:06.671 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:15:06.671 Command Effects Log Page: Supported 00:15:06.671 Get Log Page Extended Data: Supported 00:15:06.671 Telemetry Log Pages: Not Supported 00:15:06.671 Persistent Event Log Pages: Not Supported 00:15:06.671 Supported Log Pages Log Page: May Support 00:15:06.671 Commands Supported & Effects Log Page: Not Supported 00:15:06.671 Feature Identifiers & Effects Log Page:May Support 00:15:06.671 NVMe-MI Commands & Effects Log Page: May Support 00:15:06.671 Data Area 4 for Telemetry Log: Not Supported 00:15:06.671 Error Log Page Entries Supported: 128 00:15:06.671 Keep Alive: Supported 00:15:06.671 Keep Alive Granularity: 1000 ms 00:15:06.671 00:15:06.671 NVM Command Set Attributes 00:15:06.671 ========================== 00:15:06.671 Submission Queue Entry Size 00:15:06.671 Max: 64 00:15:06.671 Min: 64 00:15:06.671 Completion Queue Entry Size 00:15:06.671 Max: 16 00:15:06.671 Min: 16 00:15:06.671 Number of Namespaces: 1024 00:15:06.671 Compare Command: Not Supported 00:15:06.671 Write Uncorrectable Command: Not Supported 00:15:06.672 Dataset Management Command: Supported 00:15:06.672 Write Zeroes Command: Supported 00:15:06.672 Set Features Save Field: Not Supported 00:15:06.672 Reservations: Not Supported 00:15:06.672 Timestamp: Not Supported 00:15:06.672 Copy: Not Supported 00:15:06.672 Volatile Write Cache: Present 00:15:06.672 Atomic Write Unit (Normal): 1 00:15:06.672 Atomic Write Unit (PFail): 1 00:15:06.672 Atomic Compare & Write Unit: 1 00:15:06.672 Fused Compare & Write: Not Supported 00:15:06.672 Scatter-Gather List 00:15:06.672 SGL Command Set: Supported 00:15:06.672 SGL Keyed: Not Supported 00:15:06.672 SGL Bit Bucket Descriptor: Not Supported 00:15:06.672 SGL Metadata Pointer: Not Supported 00:15:06.672 Oversized SGL: Not Supported 00:15:06.672 SGL Metadata Address: Not Supported 00:15:06.672 SGL Offset: Supported 00:15:06.672 Transport SGL Data Block: Not Supported 00:15:06.672 Replay Protected Memory Block: Not Supported 00:15:06.672 00:15:06.672 Firmware Slot Information 00:15:06.672 ========================= 00:15:06.672 Active slot: 0 00:15:06.672 00:15:06.672 Asymmetric Namespace Access 00:15:06.672 =========================== 00:15:06.672 Change Count : 0 00:15:06.672 Number of ANA Group Descriptors : 1 00:15:06.672 ANA Group Descriptor : 0 00:15:06.672 ANA Group ID : 1 00:15:06.672 Number of NSID Values : 1 00:15:06.672 Change Count : 0 00:15:06.672 ANA State : 1 00:15:06.672 Namespace Identifier : 1 00:15:06.672 00:15:06.672 Commands Supported and Effects 00:15:06.672 ============================== 00:15:06.672 Admin Commands 00:15:06.672 -------------- 00:15:06.672 Get Log Page (02h): Supported 00:15:06.672 Identify (06h): Supported 00:15:06.672 Abort (08h): Supported 00:15:06.672 Set Features (09h): Supported 00:15:06.672 Get Features (0Ah): Supported 00:15:06.672 Asynchronous Event Request (0Ch): Supported 00:15:06.672 Keep Alive (18h): Supported 00:15:06.672 I/O Commands 00:15:06.672 ------------ 00:15:06.672 Flush (00h): Supported 00:15:06.672 Write (01h): Supported LBA-Change 00:15:06.672 Read (02h): Supported 00:15:06.672 Write Zeroes (08h): Supported LBA-Change 00:15:06.672 Dataset Management (09h): Supported 00:15:06.672 00:15:06.672 Error Log 00:15:06.672 ========= 00:15:06.672 Entry: 0 00:15:06.672 Error Count: 0x3 00:15:06.672 Submission Queue Id: 0x0 00:15:06.672 Command Id: 0x5 00:15:06.672 Phase Bit: 0 00:15:06.672 Status Code: 0x2 00:15:06.672 Status Code Type: 0x0 00:15:06.672 Do Not Retry: 1 00:15:06.672 Error 
Location: 0x28 00:15:06.672 LBA: 0x0 00:15:06.672 Namespace: 0x0 00:15:06.672 Vendor Log Page: 0x0 00:15:06.672 ----------- 00:15:06.672 Entry: 1 00:15:06.672 Error Count: 0x2 00:15:06.672 Submission Queue Id: 0x0 00:15:06.672 Command Id: 0x5 00:15:06.672 Phase Bit: 0 00:15:06.672 Status Code: 0x2 00:15:06.672 Status Code Type: 0x0 00:15:06.672 Do Not Retry: 1 00:15:06.672 Error Location: 0x28 00:15:06.672 LBA: 0x0 00:15:06.672 Namespace: 0x0 00:15:06.672 Vendor Log Page: 0x0 00:15:06.672 ----------- 00:15:06.672 Entry: 2 00:15:06.672 Error Count: 0x1 00:15:06.672 Submission Queue Id: 0x0 00:15:06.672 Command Id: 0x4 00:15:06.672 Phase Bit: 0 00:15:06.672 Status Code: 0x2 00:15:06.672 Status Code Type: 0x0 00:15:06.672 Do Not Retry: 1 00:15:06.672 Error Location: 0x28 00:15:06.672 LBA: 0x0 00:15:06.672 Namespace: 0x0 00:15:06.672 Vendor Log Page: 0x0 00:15:06.672 00:15:06.672 Number of Queues 00:15:06.672 ================ 00:15:06.672 Number of I/O Submission Queues: 128 00:15:06.672 Number of I/O Completion Queues: 128 00:15:06.672 00:15:06.672 ZNS Specific Controller Data 00:15:06.672 ============================ 00:15:06.672 Zone Append Size Limit: 0 00:15:06.672 00:15:06.672 00:15:06.672 Active Namespaces 00:15:06.672 ================= 00:15:06.672 get_feature(0x05) failed 00:15:06.672 Namespace ID:1 00:15:06.672 Command Set Identifier: NVM (00h) 00:15:06.672 Deallocate: Supported 00:15:06.672 Deallocated/Unwritten Error: Not Supported 00:15:06.672 Deallocated Read Value: Unknown 00:15:06.672 Deallocate in Write Zeroes: Not Supported 00:15:06.672 Deallocated Guard Field: 0xFFFF 00:15:06.672 Flush: Supported 00:15:06.672 Reservation: Not Supported 00:15:06.672 Namespace Sharing Capabilities: Multiple Controllers 00:15:06.672 Size (in LBAs): 1310720 (5GiB) 00:15:06.672 Capacity (in LBAs): 1310720 (5GiB) 00:15:06.672 Utilization (in LBAs): 1310720 (5GiB) 00:15:06.672 UUID: 3f0eeede-4949-4dd6-970f-08ea663d692d 00:15:06.672 Thin Provisioning: Not Supported 00:15:06.672 Per-NS Atomic Units: Yes 00:15:06.672 Atomic Boundary Size (Normal): 0 00:15:06.672 Atomic Boundary Size (PFail): 0 00:15:06.672 Atomic Boundary Offset: 0 00:15:06.672 NGUID/EUI64 Never Reused: No 00:15:06.672 ANA group ID: 1 00:15:06.672 Namespace Write Protected: No 00:15:06.672 Number of LBA Formats: 1 00:15:06.672 Current LBA Format: LBA Format #00 00:15:06.672 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:15:06.672 00:15:06.672 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:15:06.672 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:06.672 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:15:06.672 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:06.672 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:15:06.672 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:06.672 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:06.672 rmmod nvme_tcp 00:15:06.672 rmmod nvme_fabrics 00:15:06.672 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:06.672 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:15:06.672 19:49:01 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:15:06.672 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:15:06.672 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:06.672 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:06.672 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:06.672 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:15:06.672 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:15:06.672 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:06.672 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:15:06.672 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:06.672 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:06.672 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:06.933 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:06.933 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:06.933 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:06.933 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:06.933 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:06.933 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:06.933 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:06.933 19:49:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:06.933 19:49:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:06.933 19:49:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:06.933 19:49:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:06.933 19:49:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:06.933 19:49:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:06.933 19:49:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:06.933 19:49:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:06.933 19:49:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:06.933 19:49:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:15:06.933 19:49:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:15:06.933 19:49:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:15:06.933 19:49:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:15:06.933 19:49:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:15:06.933 19:49:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:15:06.933 19:49:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:15:06.933 19:49:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:15:06.933 19:49:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:15:06.933 19:49:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:15:07.192 19:49:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:07.761 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:07.761 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:15:07.761 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:15:07.761 00:15:07.761 real 0m2.807s 00:15:07.761 user 0m0.845s 00:15:07.761 sys 0m1.167s 00:15:07.761 19:49:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:07.761 19:49:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.761 ************************************ 00:15:07.761 END TEST nvmf_identify_kernel_target 00:15:07.761 ************************************ 00:15:08.021 19:49:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:15:08.021 19:49:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:08.021 19:49:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:08.021 19:49:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:08.021 ************************************ 00:15:08.021 START TEST nvmf_auth_host 00:15:08.021 ************************************ 00:15:08.021 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:15:08.021 * Looking for test storage... 
00:15:08.021 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:08.021 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:08.021 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:15:08.021 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:08.021 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:08.021 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:08.021 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:08.021 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:08.021 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:15:08.021 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:08.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:08.022 --rc genhtml_branch_coverage=1 00:15:08.022 --rc genhtml_function_coverage=1 00:15:08.022 --rc genhtml_legend=1 00:15:08.022 --rc geninfo_all_blocks=1 00:15:08.022 --rc geninfo_unexecuted_blocks=1 00:15:08.022 00:15:08.022 ' 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:08.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:08.022 --rc genhtml_branch_coverage=1 00:15:08.022 --rc genhtml_function_coverage=1 00:15:08.022 --rc genhtml_legend=1 00:15:08.022 --rc geninfo_all_blocks=1 00:15:08.022 --rc geninfo_unexecuted_blocks=1 00:15:08.022 00:15:08.022 ' 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:08.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:08.022 --rc genhtml_branch_coverage=1 00:15:08.022 --rc genhtml_function_coverage=1 00:15:08.022 --rc genhtml_legend=1 00:15:08.022 --rc geninfo_all_blocks=1 00:15:08.022 --rc geninfo_unexecuted_blocks=1 00:15:08.022 00:15:08.022 ' 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:08.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:08.022 --rc genhtml_branch_coverage=1 00:15:08.022 --rc genhtml_function_coverage=1 00:15:08.022 --rc genhtml_legend=1 00:15:08.022 --rc geninfo_all_blocks=1 00:15:08.022 --rc geninfo_unexecuted_blocks=1 00:15:08.022 00:15:08.022 ' 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=91838eb1-5852-43eb-90b2-09876f360ab2 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:08.022 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:08.022 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:08.023 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:08.023 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:08.023 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:08.023 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:08.023 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:08.023 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:08.023 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:08.023 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:08.023 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:08.023 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:08.023 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:08.023 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:08.023 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:08.023 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:08.023 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:08.023 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:08.023 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:08.023 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:08.023 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:08.023 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:08.023 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:08.023 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:08.023 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:08.023 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:08.023 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:08.023 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:08.023 Cannot find device "nvmf_init_br" 00:15:08.023 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:15:08.023 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:08.023 Cannot find device "nvmf_init_br2" 00:15:08.023 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:15:08.023 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:08.023 Cannot find device "nvmf_tgt_br" 00:15:08.023 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:15:08.023 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:08.023 Cannot find device "nvmf_tgt_br2" 00:15:08.023 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:15:08.023 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:08.023 Cannot find device "nvmf_init_br" 00:15:08.023 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:15:08.023 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:08.281 Cannot find device "nvmf_init_br2" 00:15:08.281 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:15:08.281 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:08.281 Cannot find device "nvmf_tgt_br" 00:15:08.281 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:15:08.281 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:08.281 Cannot find device "nvmf_tgt_br2" 00:15:08.281 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:15:08.281 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:08.281 Cannot find device "nvmf_br" 00:15:08.281 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:15:08.281 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:08.281 Cannot find device "nvmf_init_if" 00:15:08.281 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:15:08.281 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:08.281 Cannot find device "nvmf_init_if2" 00:15:08.281 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:15:08.281 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:08.281 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:08.281 19:49:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:15:08.281 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:08.281 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:08.281 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:15:08.281 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:08.281 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:08.281 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:08.281 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:08.282 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:08.282 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:08.282 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:08.282 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:08.282 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:08.282 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:08.282 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:08.282 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:08.282 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:08.282 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:08.282 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:08.282 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:08.282 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:08.282 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:08.282 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:08.282 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:08.282 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:08.282 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:08.282 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:08.282 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:08.282 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
00:15:08.282 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:08.282 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:08.282 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:08.282 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:08.282 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:08.282 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:08.282 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:08.282 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:08.282 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:08.282 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:15:08.282 00:15:08.282 --- 10.0.0.3 ping statistics --- 00:15:08.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:08.282 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:15:08.282 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:08.282 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:08.282 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.098 ms 00:15:08.282 00:15:08.282 --- 10.0.0.4 ping statistics --- 00:15:08.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:08.282 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:15:08.282 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:08.282 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:08.282 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:15:08.282 00:15:08.282 --- 10.0.0.1 ping statistics --- 00:15:08.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:08.282 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:15:08.282 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:08.282 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:08.282 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.043 ms 00:15:08.282 00:15:08.282 --- 10.0.0.2 ping statistics --- 00:15:08.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:08.282 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:15:08.282 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:08.282 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 00:15:08.282 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:08.282 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:08.282 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:08.282 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:08.282 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:08.282 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:08.282 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:08.542 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:15:08.542 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:08.542 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:08.542 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:08.542 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=76838 00:15:08.542 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:15:08.542 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 76838 00:15:08.542 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 76838 ']' 00:15:08.542 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:08.542 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:08.542 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
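For reference, the veth/namespace/bridge topology that the nvmf_veth_init trace above sets up can be condensed into the following sketch. Interface names, addresses and the port-4420 iptables rule are taken directly from the trace; the second initiator/target pair, the remaining iptables rules and all of the helper's error handling are omitted, so this illustrates the layout rather than replacing nvmf/common.sh:

# run as root; mirrors the ip/iptables commands traced above
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3    # initiator side reaching the in-namespace target IP

The four pings in the trace (10.0.0.3/10.0.0.4 from the host, 10.0.0.1/10.0.0.2 from inside nvmf_tgt_ns_spdk) are the sanity check that both directions work before the nvmf target is started.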
00:15:08.542 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:08.542 19:49:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:09.486 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:09.486 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:15:09.486 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:09.486 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:09.486 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:09.486 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:09.486 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:15:09.486 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:15:09.486 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:15:09.486 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:09.486 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:15:09.486 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:15:09.486 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:15:09.486 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:09.486 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f3ba7362c62080c8cd301ab65ef40910 00:15:09.486 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:15:09.486 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.MA9 00:15:09.486 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f3ba7362c62080c8cd301ab65ef40910 0 00:15:09.486 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f3ba7362c62080c8cd301ab65ef40910 0 00:15:09.486 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:15:09.486 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:09.486 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f3ba7362c62080c8cd301ab65ef40910 00:15:09.486 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:15:09.486 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:15:09.486 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.MA9 00:15:09.486 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.MA9 00:15:09.486 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.MA9 00:15:09.486 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:15:09.486 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:15:09.486 19:49:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:09.486 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:15:09.486 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:15:09.486 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:15:09.486 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:09.486 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f28f55520e3d934a76a6a77cd9d61d20e474e88c803c814944dc099567e8e514 00:15:09.486 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:09.486 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.W1X 00:15:09.486 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f28f55520e3d934a76a6a77cd9d61d20e474e88c803c814944dc099567e8e514 3 00:15:09.486 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f28f55520e3d934a76a6a77cd9d61d20e474e88c803c814944dc099567e8e514 3 00:15:09.486 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:15:09.486 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:09.486 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f28f55520e3d934a76a6a77cd9d61d20e474e88c803c814944dc099567e8e514 00:15:09.486 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:15:09.486 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:15:09.486 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.W1X 00:15:09.486 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.W1X 00:15:09.486 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.W1X 00:15:09.486 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:15:09.486 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:15:09.486 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:09.486 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:15:09.486 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:15:09.486 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:15:09.486 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:09.486 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4431e5751c55f1d98b529501f2ec809bb54cad96a552fec1 00:15:09.486 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:15:09.486 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.JnG 00:15:09.486 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4431e5751c55f1d98b529501f2ec809bb54cad96a552fec1 0 00:15:09.486 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4431e5751c55f1d98b529501f2ec809bb54cad96a552fec1 0 
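Each gen_dhchap_key invocation in this trace follows the same pattern: draw len/2 random bytes with xxd, wrap them into a DHHC-1 secret string via the inline python step, write the result to a mktemp file with mode 0600, and echo the file path. A minimal usage sketch, assuming the SPDK test environment from this run so that nvmf/common.sh (which defines gen_dhchap_key) can be sourced:

# assumes the repo layout from this run; sourcing common.sh pulls in the helpers traced above
source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh

key0=$(gen_dhchap_key null 32)      # 16 random bytes, no transformation digest
ckey0=$(gen_dhchap_key sha512 64)   # 32 random bytes, SHA-512 transformation
cat "$key0"                         # prints the generated DHHC-1 secret
ls -l "$key0" "$ckey0"              # both files are created 0600 under /tmp

Later in the trace the resulting files are handed to the target with rpc_cmd keyring_file_add_key keyN/ckeyN <file>, which is how auth.sh makes them available for DH-HMAC-CHAP authentication.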
00:15:09.486 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:15:09.486 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:09.486 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4431e5751c55f1d98b529501f2ec809bb54cad96a552fec1 00:15:09.486 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:15:09.486 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:15:09.486 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.JnG 00:15:09.486 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.JnG 00:15:09.486 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.JnG 00:15:09.486 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:15:09.486 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:15:09.486 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:09.487 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:15:09.487 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:15:09.487 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:15:09.487 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:09.487 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6d6c4ecdb5e555264e7a067a28dc058baa2e55fdb734f373 00:15:09.487 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:09.487 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.dez 00:15:09.487 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6d6c4ecdb5e555264e7a067a28dc058baa2e55fdb734f373 2 00:15:09.487 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6d6c4ecdb5e555264e7a067a28dc058baa2e55fdb734f373 2 00:15:09.487 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:15:09.487 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:09.487 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6d6c4ecdb5e555264e7a067a28dc058baa2e55fdb734f373 00:15:09.487 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:15:09.487 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:15:09.487 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.dez 00:15:09.487 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.dez 00:15:09.487 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.dez 00:15:09.487 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:15:09.487 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:15:09.487 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:09.487 19:49:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:15:09.487 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:15:09.487 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:15:09.487 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:09.487 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=88d0bad3cdb4fb1c8642f41025367c4f 00:15:09.487 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:09.487 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.sLf 00:15:09.487 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 88d0bad3cdb4fb1c8642f41025367c4f 1 00:15:09.487 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 88d0bad3cdb4fb1c8642f41025367c4f 1 00:15:09.487 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:15:09.487 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:09.487 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=88d0bad3cdb4fb1c8642f41025367c4f 00:15:09.487 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:15:09.487 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:15:09.748 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.sLf 00:15:09.748 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.sLf 00:15:09.748 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.sLf 00:15:09.748 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:15:09.748 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:15:09.748 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:09.748 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:15:09.748 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:15:09.748 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:15:09.748 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:09.748 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=aa48500987194db9b949dd0a6e896dab 00:15:09.748 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:09.748 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.r7a 00:15:09.748 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key aa48500987194db9b949dd0a6e896dab 1 00:15:09.748 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 aa48500987194db9b949dd0a6e896dab 1 00:15:09.748 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:15:09.748 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:09.748 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=aa48500987194db9b949dd0a6e896dab 00:15:09.748 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:15:09.748 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:15:09.748 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.r7a 00:15:09.748 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.r7a 00:15:09.748 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.r7a 00:15:09.748 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:15:09.748 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:15:09.748 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:09.748 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:15:09.748 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:15:09.748 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:15:09.748 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:09.748 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f6f1e1a47bfbea8636425f2a77ed9f19c53363128745ca09 00:15:09.748 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:09.748 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.won 00:15:09.748 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f6f1e1a47bfbea8636425f2a77ed9f19c53363128745ca09 2 00:15:09.749 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f6f1e1a47bfbea8636425f2a77ed9f19c53363128745ca09 2 00:15:09.749 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:15:09.749 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:09.749 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f6f1e1a47bfbea8636425f2a77ed9f19c53363128745ca09 00:15:09.749 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:15:09.749 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:15:09.749 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.won 00:15:09.749 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.won 00:15:09.749 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.won 00:15:09.749 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:15:09.749 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:15:09.749 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:09.749 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:15:09.749 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:15:09.749 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:15:09.749 19:49:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:09.749 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=621cd6a6bdebd24d823e3c440eb531eb 00:15:09.749 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:15:09.749 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.xAb 00:15:09.749 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 621cd6a6bdebd24d823e3c440eb531eb 0 00:15:09.749 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 621cd6a6bdebd24d823e3c440eb531eb 0 00:15:09.749 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:15:09.749 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:09.749 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=621cd6a6bdebd24d823e3c440eb531eb 00:15:09.749 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:15:09.749 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:15:09.749 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.xAb 00:15:09.749 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.xAb 00:15:09.749 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.xAb 00:15:09.749 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:15:09.749 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:15:09.749 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:09.749 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:15:09.749 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:15:09.749 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:15:09.749 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:09.749 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8d1a805d620193b8f119382e420ca0fb45d75a4d8ddf5f578402f82215a9b60f 00:15:09.749 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:09.749 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.sI6 00:15:09.749 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8d1a805d620193b8f119382e420ca0fb45d75a4d8ddf5f578402f82215a9b60f 3 00:15:09.749 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8d1a805d620193b8f119382e420ca0fb45d75a4d8ddf5f578402f82215a9b60f 3 00:15:09.749 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:15:09.749 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:09.749 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=8d1a805d620193b8f119382e420ca0fb45d75a4d8ddf5f578402f82215a9b60f 00:15:09.749 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:15:09.749 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:15:09.749 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.sI6 00:15:09.749 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.sI6 00:15:09.749 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.sI6 00:15:09.749 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:15:09.749 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 76838 00:15:09.749 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 76838 ']' 00:15:09.749 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:09.749 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:09.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:09.749 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:09.749 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:09.749 19:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:10.012 19:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:10.012 19:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:15:10.012 19:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:15:10.012 19:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.MA9 00:15:10.012 19:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.012 19:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:10.012 19:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.012 19:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.W1X ]] 00:15:10.012 19:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.W1X 00:15:10.012 19:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.012 19:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:10.012 19:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.012 19:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:15:10.012 19:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.JnG 00:15:10.012 19:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.012 19:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:10.012 19:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.012 19:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.dez ]] 00:15:10.012 19:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.dez 00:15:10.012 19:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.012 19:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:10.012 19:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.012 19:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:15:10.012 19:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.sLf 00:15:10.012 19:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.012 19:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:10.012 19:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.012 19:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.r7a ]] 00:15:10.012 19:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.r7a 00:15:10.012 19:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.012 19:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:10.012 19:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.012 19:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:15:10.012 19:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.won 00:15:10.012 19:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.012 19:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:10.012 19:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.012 19:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.xAb ]] 00:15:10.012 19:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.xAb 00:15:10.012 19:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.012 19:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:10.012 19:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.012 19:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:15:10.012 19:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.sI6 00:15:10.012 19:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.012 19:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:10.012 19:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.012 19:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:15:10.012 19:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:15:10.012 19:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:15:10.012 19:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:10.012 19:49:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:10.012 19:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:10.012 19:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:10.012 19:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:10.012 19:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:10.012 19:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:10.012 19:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:10.012 19:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:10.012 19:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:10.012 19:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:15:10.012 19:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:15:10.012 19:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:15:10.012 19:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:15:10.012 19:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:15:10.012 19:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:15:10.012 19:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:15:10.012 19:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:15:10.012 19:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:15:10.272 19:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:15:10.272 19:49:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:10.531 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:10.531 Waiting for block devices as requested 00:15:10.531 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:15:10.531 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:15:11.102 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:15:11.102 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:15:11.102 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:15:11.102 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:15:11.102 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:11.102 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:11.102 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:15:11.102 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:15:11.102 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:15:11.102 No valid GPT data, bailing 00:15:11.102 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:15:11.102 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:15:11.102 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:15:11.102 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:15:11.102 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:15:11.102 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:15:11.102 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:15:11.102 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:15:11.102 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:15:11.102 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:11.102 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:15:11.102 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:15:11.102 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:15:11.102 No valid GPT data, bailing 00:15:11.102 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:15:11.102 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:15:11.102 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:15:11.102 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:15:11.102 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:15:11.102 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:15:11.102 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:15:11.102 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:15:11.102 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:15:11.102 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:11.102 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:15:11.102 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:15:11.102 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:15:11.361 No valid GPT data, bailing 00:15:11.361 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:15:11.361 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:15:11.361 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:15:11.361 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:15:11.361 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:15:11.361 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:15:11.361 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:15:11.361 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:15:11.361 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:15:11.361 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:11.362 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:15:11.362 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:15:11.362 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:15:11.362 No valid GPT data, bailing 00:15:11.362 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:15:11.362 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:15:11.362 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:15:11.362 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:15:11.362 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:15:11.362 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:15:11.362 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:15:11.362 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:15:11.362 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:15:11.362 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:15:11.362 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:15:11.362 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:15:11.362 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:15:11.362 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:15:11.362 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:15:11.362 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:15:11.362 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:15:11.362 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid=91838eb1-5852-43eb-90b2-09876f360ab2 -a 10.0.0.1 -t tcp -s 4420 00:15:11.362 00:15:11.362 Discovery Log Number of Records 2, Generation counter 2 00:15:11.362 =====Discovery Log Entry 0====== 00:15:11.362 trtype: tcp 00:15:11.362 adrfam: ipv4 00:15:11.362 subtype: current discovery subsystem 00:15:11.362 treq: not specified, sq flow control disable supported 00:15:11.362 portid: 1 00:15:11.362 trsvcid: 4420 00:15:11.362 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:11.362 traddr: 10.0.0.1 00:15:11.362 eflags: none 00:15:11.362 sectype: none 00:15:11.362 =====Discovery Log Entry 1====== 00:15:11.362 trtype: tcp 00:15:11.362 adrfam: ipv4 00:15:11.362 subtype: nvme subsystem 00:15:11.362 treq: not specified, sq flow control disable supported 00:15:11.362 portid: 1 00:15:11.362 trsvcid: 4420 00:15:11.362 subnqn: nqn.2024-02.io.spdk:cnode0 00:15:11.362 traddr: 10.0.0.1 00:15:11.362 eflags: none 00:15:11.362 sectype: none 00:15:11.362 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:15:11.362 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:15:11.362 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:15:11.362 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:15:11.362 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:11.362 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:11.362 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:11.362 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:11.362 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQzMWU1NzUxYzU1ZjFkOThiNTI5NTAxZjJlYzgwOWJiNTRjYWQ5NmE1NTJmZWMx4eq7yQ==: 00:15:11.362 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:NmQ2YzRlY2RiNWU1NTUyNjRlN2EwNjdhMjhkYzA1OGJhYTJlNTVmZGI3MzRmMzczCjrNXg==: 00:15:11.362 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:11.362 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:11.623 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQzMWU1NzUxYzU1ZjFkOThiNTI5NTAxZjJlYzgwOWJiNTRjYWQ5NmE1NTJmZWMx4eq7yQ==: 00:15:11.623 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmQ2YzRlY2RiNWU1NTUyNjRlN2EwNjdhMjhkYzA1OGJhYTJlNTVmZGI3MzRmMzczCjrNXg==: ]] 00:15:11.623 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmQ2YzRlY2RiNWU1NTUyNjRlN2EwNjdhMjhkYzA1OGJhYTJlNTVmZGI3MzRmMzczCjrNXg==: 00:15:11.623 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:15:11.623 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:15:11.623 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:15:11.623 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:11.623 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:15:11.623 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:11.623 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:15:11.623 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:11.623 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:11.623 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:11.623 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:11.623 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.623 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:11.623 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.623 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:11.623 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:11.623 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:11.623 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:11.623 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:11.623 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:11.623 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:11.623 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:11.623 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:11.623 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
10.0.0.1 ]] 00:15:11.623 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:11.623 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:11.623 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.623 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:11.623 nvme0n1 00:15:11.623 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.623 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:11.623 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.623 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:11.623 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:11.623 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.623 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:11.623 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:11.623 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.623 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:11.623 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.623 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:15:11.623 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:11.623 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:11.623 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:15:11.623 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:11.623 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:11.623 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:11.623 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:11.623 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjNiYTczNjJjNjIwODBjOGNkMzAxYWI2NWVmNDA5MTCU46W7: 00:15:11.623 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjI4ZjU1NTIwZTNkOTM0YTc2YTZhNzdjZDlkNjFkMjBlNDc0ZTg4YzgwM2M4MTQ5NDRkYzA5OTU2N2U4ZTUxNErXPwI=: 00:15:11.623 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:11.623 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:11.624 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjNiYTczNjJjNjIwODBjOGNkMzAxYWI2NWVmNDA5MTCU46W7: 00:15:11.624 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjI4ZjU1NTIwZTNkOTM0YTc2YTZhNzdjZDlkNjFkMjBlNDc0ZTg4YzgwM2M4MTQ5NDRkYzA5OTU2N2U4ZTUxNErXPwI=: ]] 00:15:11.624 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZjI4ZjU1NTIwZTNkOTM0YTc2YTZhNzdjZDlkNjFkMjBlNDc0ZTg4YzgwM2M4MTQ5NDRkYzA5OTU2N2U4ZTUxNErXPwI=: 00:15:11.624 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:15:11.624 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:11.624 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:11.624 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:11.624 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:11.624 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:11.624 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:11.624 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.624 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:11.624 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.624 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:11.624 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:11.624 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:11.624 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:11.624 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:11.624 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:11.624 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:11.624 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:11.624 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:11.624 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:11.624 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:11.624 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:11.624 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.624 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:11.883 nvme0n1 00:15:11.883 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.883 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:11.883 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:11.883 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.883 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:11.883 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.883 
19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:11.883 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:11.883 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.883 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:11.883 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.883 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:11.883 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:15:11.883 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:11.884 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:11.884 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:11.884 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:11.884 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQzMWU1NzUxYzU1ZjFkOThiNTI5NTAxZjJlYzgwOWJiNTRjYWQ5NmE1NTJmZWMx4eq7yQ==: 00:15:11.884 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmQ2YzRlY2RiNWU1NTUyNjRlN2EwNjdhMjhkYzA1OGJhYTJlNTVmZGI3MzRmMzczCjrNXg==: 00:15:11.884 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:11.884 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:11.884 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQzMWU1NzUxYzU1ZjFkOThiNTI5NTAxZjJlYzgwOWJiNTRjYWQ5NmE1NTJmZWMx4eq7yQ==: 00:15:11.884 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmQ2YzRlY2RiNWU1NTUyNjRlN2EwNjdhMjhkYzA1OGJhYTJlNTVmZGI3MzRmMzczCjrNXg==: ]] 00:15:11.884 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmQ2YzRlY2RiNWU1NTUyNjRlN2EwNjdhMjhkYzA1OGJhYTJlNTVmZGI3MzRmMzczCjrNXg==: 00:15:11.884 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:15:11.884 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:11.884 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:11.884 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:11.884 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:11.884 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:11.884 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:11.884 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.884 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:11.884 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.884 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:11.884 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:11.884 19:49:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:11.884 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:11.884 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:11.884 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:11.884 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:11.884 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:11.884 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:11.884 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:11.884 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:11.884 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:11.884 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.884 19:49:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:11.884 nvme0n1 00:15:11.884 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.884 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:11.884 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:11.884 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.884 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:11.884 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.884 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:11.884 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:11.884 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.884 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:12.144 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.144 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:12.144 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:15:12.144 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:12.144 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:12.144 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:12.144 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:12.144 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODhkMGJhZDNjZGI0ZmIxYzg2NDJmNDEwMjUzNjdjNGbNjTNE: 00:15:12.144 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWE0ODUwMDk4NzE5NGRiOWI5NDlkZDBhNmU4OTZkYWJsq0QM: 00:15:12.144 19:49:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:12.144 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:12.144 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODhkMGJhZDNjZGI0ZmIxYzg2NDJmNDEwMjUzNjdjNGbNjTNE: 00:15:12.144 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWE0ODUwMDk4NzE5NGRiOWI5NDlkZDBhNmU4OTZkYWJsq0QM: ]] 00:15:12.144 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWE0ODUwMDk4NzE5NGRiOWI5NDlkZDBhNmU4OTZkYWJsq0QM: 00:15:12.144 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:15:12.144 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:12.144 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:12.144 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:12.144 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:15:12.144 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:12.144 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:12.144 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.144 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:12.144 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.144 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:12.144 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:12.144 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:12.144 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:12.144 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:12.144 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:12.144 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:12.144 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:12.144 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:12.144 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:12.144 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:12.144 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:12.144 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.144 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:12.144 nvme0n1 00:15:12.144 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.144 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:15:12.144 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:12.144 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.144 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:12.144 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.144 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:12.144 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:12.144 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.144 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:12.144 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.144 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:12.144 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:15:12.144 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:12.144 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:12.144 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:12.144 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:15:12.144 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjZmMWUxYTQ3YmZiZWE4NjM2NDI1ZjJhNzdlZDlmMTljNTMzNjMxMjg3NDVjYTA53JTdpw==: 00:15:12.144 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjIxY2Q2YTZiZGViZDI0ZDgyM2UzYzQ0MGViNTMxZWJvUMtB: 00:15:12.144 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:12.144 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:12.144 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjZmMWUxYTQ3YmZiZWE4NjM2NDI1ZjJhNzdlZDlmMTljNTMzNjMxMjg3NDVjYTA53JTdpw==: 00:15:12.144 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjIxY2Q2YTZiZGViZDI0ZDgyM2UzYzQ0MGViNTMxZWJvUMtB: ]] 00:15:12.144 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjIxY2Q2YTZiZGViZDI0ZDgyM2UzYzQ0MGViNTMxZWJvUMtB: 00:15:12.144 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:15:12.145 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:12.145 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:12.145 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:12.145 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:15:12.145 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:12.145 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:12.145 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.145 19:49:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:12.145 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.145 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:12.145 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:12.145 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:12.145 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:12.145 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:12.145 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:12.145 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:12.145 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:12.145 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:12.145 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:12.145 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:12.145 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:15:12.145 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.145 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:12.405 nvme0n1 00:15:12.405 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.405 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:12.405 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:12.405 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.405 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:12.405 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.405 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:12.405 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:12.405 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.405 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:12.405 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.405 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:12.405 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:15:12.405 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:12.405 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:12.405 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:12.405 
19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:15:12.405 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGQxYTgwNWQ2MjAxOTNiOGYxMTkzODJlNDIwY2EwZmI0NWQ3NWE0ZDhkZGY1ZjU3ODQwMmY4MjIxNWE5YjYwZjfBZwo=: 00:15:12.405 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:15:12.405 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:12.405 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:12.405 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGQxYTgwNWQ2MjAxOTNiOGYxMTkzODJlNDIwY2EwZmI0NWQ3NWE0ZDhkZGY1ZjU3ODQwMmY4MjIxNWE5YjYwZjfBZwo=: 00:15:12.405 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:15:12.405 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:15:12.405 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:12.405 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:12.405 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:12.405 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:15:12.405 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:12.405 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:12.405 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.405 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:12.405 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.405 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:12.405 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:12.405 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:12.406 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:12.406 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:12.406 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:12.406 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:12.406 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:12.406 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:12.406 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:12.406 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:12.406 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:12.406 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.406 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
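For reference, the pattern that repeats through the trace above can be condensed into the following sketch. The commands and values are the ones visible in this run's xtrace; the surrounding loop over digests, DH groups and key ids is implied by the host/auth.sh line numbers, and the exact script internals are not reproduced here.

  # One authentication round as exercised in this log (sha256/ffdhe2048 shown;
  # later rounds swap in other digests, dhgroups and key ids).
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0   # ctrlr key omitted when none exists (e.g. key4)
  rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expect "nvme0" when the handshake succeeds
  rpc_cmd bdev_nvme_detach_controller nvme0              # tear down before the next combination

Before each round the nvmet_auth_set_key echoes above feed the matching hash name, DH group and DHHC-1 secret to the kernel target's host entry (presumably its dhchap_* configfs attributes; xtrace does not show the redirection targets), so both sides negotiate with the same parameters.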
00:15:12.406 nvme0n1 00:15:12.406 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.406 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:12.406 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:12.406 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.406 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:12.406 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.406 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:12.406 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:12.406 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.406 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:12.406 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.406 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:12.406 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:12.406 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:15:12.406 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:12.406 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:12.406 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:12.406 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:12.406 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjNiYTczNjJjNjIwODBjOGNkMzAxYWI2NWVmNDA5MTCU46W7: 00:15:12.406 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjI4ZjU1NTIwZTNkOTM0YTc2YTZhNzdjZDlkNjFkMjBlNDc0ZTg4YzgwM2M4MTQ5NDRkYzA5OTU2N2U4ZTUxNErXPwI=: 00:15:12.406 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:12.406 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:12.666 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjNiYTczNjJjNjIwODBjOGNkMzAxYWI2NWVmNDA5MTCU46W7: 00:15:12.666 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjI4ZjU1NTIwZTNkOTM0YTc2YTZhNzdjZDlkNjFkMjBlNDc0ZTg4YzgwM2M4MTQ5NDRkYzA5OTU2N2U4ZTUxNErXPwI=: ]] 00:15:12.666 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjI4ZjU1NTIwZTNkOTM0YTc2YTZhNzdjZDlkNjFkMjBlNDc0ZTg4YzgwM2M4MTQ5NDRkYzA5OTU2N2U4ZTUxNErXPwI=: 00:15:12.666 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:15:12.666 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:12.666 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:12.666 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:12.666 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:12.666 19:49:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:12.666 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:12.666 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.666 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:12.666 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.666 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:12.666 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:12.666 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:12.666 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:12.666 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:12.666 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:12.666 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:12.666 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:12.666 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:12.666 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:12.666 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:12.666 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:12.666 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.666 19:49:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:12.925 nvme0n1 00:15:12.925 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.925 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:12.925 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:12.925 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.925 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:12.925 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.925 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:12.925 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:12.925 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.925 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:12.925 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.925 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:12.925 19:49:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:15:12.926 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:12.926 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:12.926 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:12.926 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:12.926 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQzMWU1NzUxYzU1ZjFkOThiNTI5NTAxZjJlYzgwOWJiNTRjYWQ5NmE1NTJmZWMx4eq7yQ==: 00:15:12.926 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmQ2YzRlY2RiNWU1NTUyNjRlN2EwNjdhMjhkYzA1OGJhYTJlNTVmZGI3MzRmMzczCjrNXg==: 00:15:12.926 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:12.926 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:12.926 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQzMWU1NzUxYzU1ZjFkOThiNTI5NTAxZjJlYzgwOWJiNTRjYWQ5NmE1NTJmZWMx4eq7yQ==: 00:15:12.926 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmQ2YzRlY2RiNWU1NTUyNjRlN2EwNjdhMjhkYzA1OGJhYTJlNTVmZGI3MzRmMzczCjrNXg==: ]] 00:15:12.926 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmQ2YzRlY2RiNWU1NTUyNjRlN2EwNjdhMjhkYzA1OGJhYTJlNTVmZGI3MzRmMzczCjrNXg==: 00:15:12.926 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:15:12.926 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:12.926 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:12.926 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:12.926 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:12.926 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:12.926 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:12.926 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.926 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:12.926 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.926 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:12.926 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:12.926 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:12.926 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:12.926 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:12.926 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:12.926 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:12.926 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:12.926 19:49:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:12.926 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:12.926 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:12.926 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:12.926 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.926 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:13.185 nvme0n1 00:15:13.185 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.185 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:13.185 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:13.185 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.185 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:13.185 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.185 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:13.185 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:13.185 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.185 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:13.185 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.185 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:13.185 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:15:13.185 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:13.185 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:13.185 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:13.185 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:13.185 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODhkMGJhZDNjZGI0ZmIxYzg2NDJmNDEwMjUzNjdjNGbNjTNE: 00:15:13.185 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWE0ODUwMDk4NzE5NGRiOWI5NDlkZDBhNmU4OTZkYWJsq0QM: 00:15:13.185 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:13.185 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:13.186 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODhkMGJhZDNjZGI0ZmIxYzg2NDJmNDEwMjUzNjdjNGbNjTNE: 00:15:13.186 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWE0ODUwMDk4NzE5NGRiOWI5NDlkZDBhNmU4OTZkYWJsq0QM: ]] 00:15:13.186 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWE0ODUwMDk4NzE5NGRiOWI5NDlkZDBhNmU4OTZkYWJsq0QM: 00:15:13.186 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:15:13.186 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:13.186 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:13.186 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:13.186 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:15:13.186 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:13.186 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:13.186 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.186 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:13.186 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.186 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:13.186 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:13.186 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:13.186 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:13.186 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:13.186 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:13.186 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:13.186 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:13.186 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:13.186 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:13.186 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:13.186 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:13.186 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.186 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:13.446 nvme0n1 00:15:13.446 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.446 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:13.446 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:13.446 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.446 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:13.446 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.446 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:13.446 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:15:13.446 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.446 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:13.446 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.446 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:13.446 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:15:13.446 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:13.446 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:13.446 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:13.446 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:15:13.446 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjZmMWUxYTQ3YmZiZWE4NjM2NDI1ZjJhNzdlZDlmMTljNTMzNjMxMjg3NDVjYTA53JTdpw==: 00:15:13.446 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjIxY2Q2YTZiZGViZDI0ZDgyM2UzYzQ0MGViNTMxZWJvUMtB: 00:15:13.446 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:13.446 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:13.446 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjZmMWUxYTQ3YmZiZWE4NjM2NDI1ZjJhNzdlZDlmMTljNTMzNjMxMjg3NDVjYTA53JTdpw==: 00:15:13.446 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjIxY2Q2YTZiZGViZDI0ZDgyM2UzYzQ0MGViNTMxZWJvUMtB: ]] 00:15:13.446 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjIxY2Q2YTZiZGViZDI0ZDgyM2UzYzQ0MGViNTMxZWJvUMtB: 00:15:13.446 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:15:13.446 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:13.446 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:13.446 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:13.446 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:15:13.446 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:13.446 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:13.446 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.446 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:13.446 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.446 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:13.446 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:13.446 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:13.446 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:13.447 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:13.447 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:13.447 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:13.447 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:13.447 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:13.447 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:13.447 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:13.447 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:15:13.447 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.447 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:13.707 nvme0n1 00:15:13.707 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.707 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:13.707 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:13.707 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.707 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:13.707 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.707 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:13.707 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:13.707 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.707 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:13.707 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.707 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:13.707 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:15:13.707 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:13.707 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:13.707 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:13.707 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:15:13.707 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGQxYTgwNWQ2MjAxOTNiOGYxMTkzODJlNDIwY2EwZmI0NWQ3NWE0ZDhkZGY1ZjU3ODQwMmY4MjIxNWE5YjYwZjfBZwo=: 00:15:13.707 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:15:13.707 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:13.707 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:13.707 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OGQxYTgwNWQ2MjAxOTNiOGYxMTkzODJlNDIwY2EwZmI0NWQ3NWE0ZDhkZGY1ZjU3ODQwMmY4MjIxNWE5YjYwZjfBZwo=: 00:15:13.707 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:15:13.707 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:15:13.707 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:13.707 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:13.707 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:13.707 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:15:13.707 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:13.707 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:13.707 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.707 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:13.707 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.707 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:13.707 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:13.707 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:13.707 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:13.707 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:13.707 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:13.707 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:13.707 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:13.707 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:13.707 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:13.707 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:13.707 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:13.707 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.707 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:13.707 nvme0n1 00:15:13.707 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.707 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:13.707 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:13.707 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.707 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:13.707 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.965 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:13.965 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:13.965 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.965 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:13.965 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.965 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:13.965 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:13.965 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:15:13.965 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:13.965 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:13.965 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:13.965 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:13.965 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjNiYTczNjJjNjIwODBjOGNkMzAxYWI2NWVmNDA5MTCU46W7: 00:15:13.965 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjI4ZjU1NTIwZTNkOTM0YTc2YTZhNzdjZDlkNjFkMjBlNDc0ZTg4YzgwM2M4MTQ5NDRkYzA5OTU2N2U4ZTUxNErXPwI=: 00:15:13.965 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:13.965 19:49:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:14.535 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjNiYTczNjJjNjIwODBjOGNkMzAxYWI2NWVmNDA5MTCU46W7: 00:15:14.535 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjI4ZjU1NTIwZTNkOTM0YTc2YTZhNzdjZDlkNjFkMjBlNDc0ZTg4YzgwM2M4MTQ5NDRkYzA5OTU2N2U4ZTUxNErXPwI=: ]] 00:15:14.535 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjI4ZjU1NTIwZTNkOTM0YTc2YTZhNzdjZDlkNjFkMjBlNDc0ZTg4YzgwM2M4MTQ5NDRkYzA5OTU2N2U4ZTUxNErXPwI=: 00:15:14.535 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:15:14.535 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:14.535 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:14.535 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:14.535 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:14.535 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:14.535 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:14.535 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.535 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:14.535 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.535 19:49:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:14.535 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:14.535 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:14.535 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:14.535 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:14.535 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:14.535 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:14.535 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:14.535 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:14.535 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:14.535 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:14.535 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:14.535 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.535 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:14.535 nvme0n1 00:15:14.535 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.535 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:14.535 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.535 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:14.535 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:14.535 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.535 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:14.535 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:14.535 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.535 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:14.535 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.535 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:14.535 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:15:14.535 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:14.535 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:14.535 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:14.535 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:14.535 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NDQzMWU1NzUxYzU1ZjFkOThiNTI5NTAxZjJlYzgwOWJiNTRjYWQ5NmE1NTJmZWMx4eq7yQ==: 00:15:14.535 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmQ2YzRlY2RiNWU1NTUyNjRlN2EwNjdhMjhkYzA1OGJhYTJlNTVmZGI3MzRmMzczCjrNXg==: 00:15:14.535 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:14.535 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:14.535 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQzMWU1NzUxYzU1ZjFkOThiNTI5NTAxZjJlYzgwOWJiNTRjYWQ5NmE1NTJmZWMx4eq7yQ==: 00:15:14.535 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmQ2YzRlY2RiNWU1NTUyNjRlN2EwNjdhMjhkYzA1OGJhYTJlNTVmZGI3MzRmMzczCjrNXg==: ]] 00:15:14.535 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmQ2YzRlY2RiNWU1NTUyNjRlN2EwNjdhMjhkYzA1OGJhYTJlNTVmZGI3MzRmMzczCjrNXg==: 00:15:14.535 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:15:14.535 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:14.535 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:14.535 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:14.535 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:14.535 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:14.535 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:14.535 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.535 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:14.801 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.801 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:14.801 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:14.801 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:14.801 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:14.801 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:14.801 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:14.801 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:14.801 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:14.801 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:14.801 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:14.801 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:14.801 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:14.801 19:49:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.801 19:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:15.063 nvme0n1 00:15:15.063 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.063 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:15.063 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:15.063 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.063 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:15.063 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.063 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:15.063 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:15.063 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.063 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:15.063 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.063 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:15.063 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:15:15.063 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:15.063 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:15.063 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:15.063 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:15.063 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODhkMGJhZDNjZGI0ZmIxYzg2NDJmNDEwMjUzNjdjNGbNjTNE: 00:15:15.063 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWE0ODUwMDk4NzE5NGRiOWI5NDlkZDBhNmU4OTZkYWJsq0QM: 00:15:15.063 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:15.063 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:15.063 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODhkMGJhZDNjZGI0ZmIxYzg2NDJmNDEwMjUzNjdjNGbNjTNE: 00:15:15.063 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWE0ODUwMDk4NzE5NGRiOWI5NDlkZDBhNmU4OTZkYWJsq0QM: ]] 00:15:15.063 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWE0ODUwMDk4NzE5NGRiOWI5NDlkZDBhNmU4OTZkYWJsq0QM: 00:15:15.063 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:15:15.063 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:15.063 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:15.063 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:15.063 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:15:15.063 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:15.063 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:15.063 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.063 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:15.063 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.063 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:15.063 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:15.063 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:15.063 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:15.063 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:15.063 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:15.063 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:15.063 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:15.063 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:15.063 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:15.063 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:15.063 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:15.063 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.063 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:15.325 nvme0n1 00:15:15.325 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.325 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:15.325 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:15.325 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.325 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:15.325 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.325 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:15.325 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:15.325 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.325 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:15.325 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.325 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:15.325 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:15:15.325 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:15.325 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:15.325 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:15.325 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:15:15.325 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjZmMWUxYTQ3YmZiZWE4NjM2NDI1ZjJhNzdlZDlmMTljNTMzNjMxMjg3NDVjYTA53JTdpw==: 00:15:15.325 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjIxY2Q2YTZiZGViZDI0ZDgyM2UzYzQ0MGViNTMxZWJvUMtB: 00:15:15.325 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:15.325 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:15.325 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjZmMWUxYTQ3YmZiZWE4NjM2NDI1ZjJhNzdlZDlmMTljNTMzNjMxMjg3NDVjYTA53JTdpw==: 00:15:15.325 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjIxY2Q2YTZiZGViZDI0ZDgyM2UzYzQ0MGViNTMxZWJvUMtB: ]] 00:15:15.325 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjIxY2Q2YTZiZGViZDI0ZDgyM2UzYzQ0MGViNTMxZWJvUMtB: 00:15:15.325 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:15:15.325 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:15.325 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:15.325 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:15.325 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:15:15.325 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:15.325 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:15.325 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.325 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:15.325 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.325 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:15.325 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:15.325 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:15.325 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:15.325 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:15.325 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:15.325 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:15.325 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:15.325 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:15.325 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:15.325 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:15.325 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:15:15.325 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.325 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:15.585 nvme0n1 00:15:15.585 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.586 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:15.586 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:15.586 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.586 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:15.586 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.586 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:15.586 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:15.586 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.586 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:15.586 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.586 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:15.586 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:15:15.586 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:15.586 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:15.586 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:15.586 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:15:15.586 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGQxYTgwNWQ2MjAxOTNiOGYxMTkzODJlNDIwY2EwZmI0NWQ3NWE0ZDhkZGY1ZjU3ODQwMmY4MjIxNWE5YjYwZjfBZwo=: 00:15:15.586 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:15:15.586 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:15.586 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:15.586 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGQxYTgwNWQ2MjAxOTNiOGYxMTkzODJlNDIwY2EwZmI0NWQ3NWE0ZDhkZGY1ZjU3ODQwMmY4MjIxNWE5YjYwZjfBZwo=: 00:15:15.586 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:15:15.586 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:15:15.586 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:15.586 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:15.586 19:49:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:15.586 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:15:15.586 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:15.586 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:15.586 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.586 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:15.586 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.586 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:15.586 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:15.586 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:15.586 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:15.586 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:15.586 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:15.586 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:15.586 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:15.586 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:15.586 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:15.586 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:15.586 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:15.586 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.586 19:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:15.845 nvme0n1 00:15:15.845 19:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.845 19:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:15.845 19:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:15.845 19:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.845 19:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:15.845 19:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.845 19:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:15.845 19:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:15.845 19:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.845 19:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:15.845 19:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.845 19:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:15.845 19:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:15.845 19:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:15:15.845 19:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:15.845 19:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:15.845 19:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:15.845 19:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:15.845 19:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjNiYTczNjJjNjIwODBjOGNkMzAxYWI2NWVmNDA5MTCU46W7: 00:15:15.845 19:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjI4ZjU1NTIwZTNkOTM0YTc2YTZhNzdjZDlkNjFkMjBlNDc0ZTg4YzgwM2M4MTQ5NDRkYzA5OTU2N2U4ZTUxNErXPwI=: 00:15:15.845 19:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:15.845 19:49:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:15:17.758 19:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjNiYTczNjJjNjIwODBjOGNkMzAxYWI2NWVmNDA5MTCU46W7: 00:15:17.758 19:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjI4ZjU1NTIwZTNkOTM0YTc2YTZhNzdjZDlkNjFkMjBlNDc0ZTg4YzgwM2M4MTQ5NDRkYzA5OTU2N2U4ZTUxNErXPwI=: ]] 00:15:17.758 19:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjI4ZjU1NTIwZTNkOTM0YTc2YTZhNzdjZDlkNjFkMjBlNDc0ZTg4YzgwM2M4MTQ5NDRkYzA5OTU2N2U4ZTUxNErXPwI=: 00:15:17.758 19:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:15:17.758 19:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:17.758 19:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:17.758 19:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:15:17.758 19:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:17.758 19:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:17.758 19:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:17.758 19:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.758 19:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:17.758 19:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.758 19:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:17.758 19:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:17.758 19:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:17.758 19:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:17.758 19:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:17.758 19:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:17.758 19:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:17.758 19:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:17.758 19:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:17.758 19:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:17.758 19:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:17.758 19:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:17.758 19:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.758 19:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:18.017 nvme0n1 00:15:18.017 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.017 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:18.017 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.017 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:18.017 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:18.017 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.017 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:18.017 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:18.017 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.017 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:18.017 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.017 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:18.017 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:15:18.017 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:18.017 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:18.017 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:18.017 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:18.017 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQzMWU1NzUxYzU1ZjFkOThiNTI5NTAxZjJlYzgwOWJiNTRjYWQ5NmE1NTJmZWMx4eq7yQ==: 00:15:18.017 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmQ2YzRlY2RiNWU1NTUyNjRlN2EwNjdhMjhkYzA1OGJhYTJlNTVmZGI3MzRmMzczCjrNXg==: 00:15:18.017 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:18.017 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:15:18.017 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NDQzMWU1NzUxYzU1ZjFkOThiNTI5NTAxZjJlYzgwOWJiNTRjYWQ5NmE1NTJmZWMx4eq7yQ==: 00:15:18.017 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmQ2YzRlY2RiNWU1NTUyNjRlN2EwNjdhMjhkYzA1OGJhYTJlNTVmZGI3MzRmMzczCjrNXg==: ]] 00:15:18.017 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmQ2YzRlY2RiNWU1NTUyNjRlN2EwNjdhMjhkYzA1OGJhYTJlNTVmZGI3MzRmMzczCjrNXg==: 00:15:18.017 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:15:18.017 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:18.017 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:18.017 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:15:18.017 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:18.017 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:18.017 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:18.017 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.017 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:18.017 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.017 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:18.017 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:18.017 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:18.017 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:18.017 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:18.017 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:18.017 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:18.017 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:18.017 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:18.017 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:18.017 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:18.017 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:18.017 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.017 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:18.586 nvme0n1 00:15:18.586 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.586 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:18.586 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.586 19:49:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:18.586 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:18.586 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.586 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:18.586 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:18.586 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.586 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:18.586 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.586 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:18.586 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:15:18.586 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:18.586 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:18.586 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:18.586 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:18.586 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODhkMGJhZDNjZGI0ZmIxYzg2NDJmNDEwMjUzNjdjNGbNjTNE: 00:15:18.586 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWE0ODUwMDk4NzE5NGRiOWI5NDlkZDBhNmU4OTZkYWJsq0QM: 00:15:18.586 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:18.586 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:15:18.586 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODhkMGJhZDNjZGI0ZmIxYzg2NDJmNDEwMjUzNjdjNGbNjTNE: 00:15:18.586 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWE0ODUwMDk4NzE5NGRiOWI5NDlkZDBhNmU4OTZkYWJsq0QM: ]] 00:15:18.586 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWE0ODUwMDk4NzE5NGRiOWI5NDlkZDBhNmU4OTZkYWJsq0QM: 00:15:18.586 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:15:18.586 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:18.586 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:18.586 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:15:18.586 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:15:18.586 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:18.586 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:18.586 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.586 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:18.586 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.586 19:49:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:18.586 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:18.586 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:18.586 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:18.586 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:18.586 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:18.586 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:18.586 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:18.586 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:18.586 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:18.586 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:18.586 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:18.586 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.586 19:49:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:19.204 nvme0n1 00:15:19.204 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.204 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:19.204 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:19.204 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.204 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:19.204 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.205 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:19.205 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:19.205 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.205 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:19.205 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.205 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:19.205 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:15:19.205 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:19.205 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:19.205 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:19.205 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:15:19.205 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZjZmMWUxYTQ3YmZiZWE4NjM2NDI1ZjJhNzdlZDlmMTljNTMzNjMxMjg3NDVjYTA53JTdpw==: 00:15:19.205 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjIxY2Q2YTZiZGViZDI0ZDgyM2UzYzQ0MGViNTMxZWJvUMtB: 00:15:19.205 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:19.205 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:15:19.205 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjZmMWUxYTQ3YmZiZWE4NjM2NDI1ZjJhNzdlZDlmMTljNTMzNjMxMjg3NDVjYTA53JTdpw==: 00:15:19.205 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjIxY2Q2YTZiZGViZDI0ZDgyM2UzYzQ0MGViNTMxZWJvUMtB: ]] 00:15:19.205 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjIxY2Q2YTZiZGViZDI0ZDgyM2UzYzQ0MGViNTMxZWJvUMtB: 00:15:19.205 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:15:19.205 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:19.205 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:19.205 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:15:19.205 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:15:19.205 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:19.205 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:19.205 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.205 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:19.205 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.205 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:19.205 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:19.205 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:19.205 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:19.205 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:19.205 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:19.205 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:19.205 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:19.205 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:19.205 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:19.205 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:19.205 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:15:19.205 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.205 
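[Editor's note] The nvmet_auth_set_key / connect_authenticate pair traced just above (sha256, ffdhe6144, keyid 3) is one round of the per-key sweep. A minimal sketch of that round follows, assuming the bare echo lines in the trace feed the kernel nvmet configfs DH-HMAC-CHAP attributes of the host entry; xtrace does not show redirection targets, so the configfs path below is an assumption, and keys[]/ckeys[]/rpc_cmd are the arrays and helper the trace itself references, defined earlier in the test and not reproduced here.

    # One target/host round as traced above (sketch, not the literal script).
    digest=sha256 dhgroup=ffdhe6144 keyid=3

    # Assumed configfs location for the host entry (not visible in the xtrace).
    host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo "hmac($digest)"  > "$host_dir/dhchap_hash"     # 'hmac(sha256)' in the trace
    echo "$dhgroup"       > "$host_dir/dhchap_dhgroup"
    echo "${keys[keyid]}" > "$host_dir/dhchap_key"      # the DHHC-1:... secret above
    [[ -z ${ckeys[keyid]} ]] || echo "${ckeys[keyid]}" > "$host_dir/dhchap_ctrl_key"

    # Host side: restrict the initiator to the digest/dhgroup under test, attach
    # with the matching key pair, confirm the controller came up, then detach.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0

On success the controller list contains exactly one entry named nvme0, which is the check the trace performs before detaching and advancing to the next key id.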
19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:19.487 nvme0n1 00:15:19.487 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.487 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:19.487 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.487 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:19.487 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:19.487 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.487 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:19.487 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:19.487 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.487 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:19.487 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.487 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:19.487 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:15:19.487 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:19.487 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:19.487 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:19.487 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:15:19.487 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGQxYTgwNWQ2MjAxOTNiOGYxMTkzODJlNDIwY2EwZmI0NWQ3NWE0ZDhkZGY1ZjU3ODQwMmY4MjIxNWE5YjYwZjfBZwo=: 00:15:19.487 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:15:19.487 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:19.487 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:15:19.487 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGQxYTgwNWQ2MjAxOTNiOGYxMTkzODJlNDIwY2EwZmI0NWQ3NWE0ZDhkZGY1ZjU3ODQwMmY4MjIxNWE5YjYwZjfBZwo=: 00:15:19.487 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:15:19.487 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:15:19.487 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:19.487 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:19.487 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:15:19.487 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:15:19.487 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:19.487 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:19.487 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.487 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:19.487 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.487 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:19.487 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:19.487 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:19.487 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:19.487 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:19.487 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:19.487 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:19.487 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:19.487 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:19.487 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:19.487 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:19.487 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:19.487 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.487 19:49:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:20.077 nvme0n1 00:15:20.077 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.077 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:20.077 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.077 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:20.077 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:20.077 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.077 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:20.077 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:20.077 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.077 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:20.077 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.077 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:20.077 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:20.077 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:15:20.077 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:20.077 19:49:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:20.077 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:15:20.077 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:20.077 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjNiYTczNjJjNjIwODBjOGNkMzAxYWI2NWVmNDA5MTCU46W7: 00:15:20.077 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjI4ZjU1NTIwZTNkOTM0YTc2YTZhNzdjZDlkNjFkMjBlNDc0ZTg4YzgwM2M4MTQ5NDRkYzA5OTU2N2U4ZTUxNErXPwI=: 00:15:20.077 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:20.077 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:15:20.077 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjNiYTczNjJjNjIwODBjOGNkMzAxYWI2NWVmNDA5MTCU46W7: 00:15:20.077 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjI4ZjU1NTIwZTNkOTM0YTc2YTZhNzdjZDlkNjFkMjBlNDc0ZTg4YzgwM2M4MTQ5NDRkYzA5OTU2N2U4ZTUxNErXPwI=: ]] 00:15:20.077 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjI4ZjU1NTIwZTNkOTM0YTc2YTZhNzdjZDlkNjFkMjBlNDc0ZTg4YzgwM2M4MTQ5NDRkYzA5OTU2N2U4ZTUxNErXPwI=: 00:15:20.077 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:15:20.077 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:20.077 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:20.077 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:15:20.077 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:20.077 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:20.077 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:20.077 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.077 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:20.077 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.077 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:20.077 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:20.077 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:20.077 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:20.077 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:20.077 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:20.077 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:20.077 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:20.077 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:20.077 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:20.077 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:20.077 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:20.077 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.077 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:20.648 nvme0n1 00:15:20.648 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.648 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:20.648 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:20.648 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.648 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:20.648 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.909 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:20.909 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:20.909 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.909 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:20.909 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.909 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:20.909 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:15:20.909 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:20.909 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:20.909 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:15:20.909 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:20.909 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQzMWU1NzUxYzU1ZjFkOThiNTI5NTAxZjJlYzgwOWJiNTRjYWQ5NmE1NTJmZWMx4eq7yQ==: 00:15:20.909 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmQ2YzRlY2RiNWU1NTUyNjRlN2EwNjdhMjhkYzA1OGJhYTJlNTVmZGI3MzRmMzczCjrNXg==: 00:15:20.909 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:20.909 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:15:20.909 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQzMWU1NzUxYzU1ZjFkOThiNTI5NTAxZjJlYzgwOWJiNTRjYWQ5NmE1NTJmZWMx4eq7yQ==: 00:15:20.909 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmQ2YzRlY2RiNWU1NTUyNjRlN2EwNjdhMjhkYzA1OGJhYTJlNTVmZGI3MzRmMzczCjrNXg==: ]] 00:15:20.909 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmQ2YzRlY2RiNWU1NTUyNjRlN2EwNjdhMjhkYzA1OGJhYTJlNTVmZGI3MzRmMzczCjrNXg==: 00:15:20.909 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:15:20.909 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:20.909 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:20.909 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:15:20.909 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:20.909 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:20.909 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:20.909 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.909 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:20.909 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.909 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:20.909 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:20.909 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:20.909 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:20.909 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:20.909 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:20.909 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:20.909 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:20.909 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:20.909 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:20.909 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:20.909 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:20.909 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.909 19:49:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:21.479 nvme0n1 00:15:21.479 19:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.479 19:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:21.479 19:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:21.479 19:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.479 19:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:21.479 19:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.740 19:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:21.740 19:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:21.740 19:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:21.740 19:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:21.740 19:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.740 19:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:21.740 19:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:15:21.740 19:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:21.740 19:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:21.740 19:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:15:21.740 19:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:21.740 19:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODhkMGJhZDNjZGI0ZmIxYzg2NDJmNDEwMjUzNjdjNGbNjTNE: 00:15:21.740 19:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWE0ODUwMDk4NzE5NGRiOWI5NDlkZDBhNmU4OTZkYWJsq0QM: 00:15:21.740 19:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:21.740 19:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:15:21.740 19:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODhkMGJhZDNjZGI0ZmIxYzg2NDJmNDEwMjUzNjdjNGbNjTNE: 00:15:21.740 19:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWE0ODUwMDk4NzE5NGRiOWI5NDlkZDBhNmU4OTZkYWJsq0QM: ]] 00:15:21.740 19:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWE0ODUwMDk4NzE5NGRiOWI5NDlkZDBhNmU4OTZkYWJsq0QM: 00:15:21.740 19:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:15:21.740 19:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:21.740 19:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:21.740 19:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:15:21.740 19:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:15:21.740 19:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:21.740 19:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:21.740 19:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.740 19:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:21.740 19:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.740 19:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:21.740 19:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:21.740 19:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:21.740 19:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:21.740 19:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:21.740 19:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:21.740 
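[Editor's note] The nvmf/common.sh lines interleaved through this stretch are the get_main_ns_ip helper resolving the address used by every attach in this run (10.0.0.1). A rough reconstruction from the traced statements is below; it assumes TEST_TRANSPORT is the variable behind the [[ -z tcp ]] check, since xtrace prints only the expanded value.

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP
            ["tcp"]=NVMF_INITIATOR_IP      # selected for this tcp run
        )
        # The map holds the *name* of the environment variable per transport;
        # the helper then dereferences that name. Missing transport or an
        # unset address is treated as an error in this sketch.
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1
        echo "${!ip}"                      # resolves to 10.0.0.1 here
    }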
19:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:21.740 19:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:21.740 19:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:21.740 19:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:21.740 19:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:21.740 19:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:21.740 19:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.740 19:49:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:22.309 nvme0n1 00:15:22.309 19:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.309 19:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:22.309 19:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:22.309 19:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.309 19:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:22.309 19:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.570 19:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:22.570 19:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:22.570 19:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.570 19:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:22.570 19:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.570 19:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:22.570 19:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:15:22.570 19:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:22.570 19:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:22.570 19:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:15:22.570 19:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:15:22.570 19:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjZmMWUxYTQ3YmZiZWE4NjM2NDI1ZjJhNzdlZDlmMTljNTMzNjMxMjg3NDVjYTA53JTdpw==: 00:15:22.570 19:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjIxY2Q2YTZiZGViZDI0ZDgyM2UzYzQ0MGViNTMxZWJvUMtB: 00:15:22.570 19:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:22.570 19:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:15:22.570 19:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjZmMWUxYTQ3YmZiZWE4NjM2NDI1ZjJhNzdlZDlmMTljNTMzNjMxMjg3NDVjYTA53JTdpw==: 00:15:22.570 19:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NjIxY2Q2YTZiZGViZDI0ZDgyM2UzYzQ0MGViNTMxZWJvUMtB: ]] 00:15:22.570 19:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjIxY2Q2YTZiZGViZDI0ZDgyM2UzYzQ0MGViNTMxZWJvUMtB: 00:15:22.570 19:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:15:22.570 19:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:22.570 19:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:22.570 19:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:15:22.570 19:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:15:22.570 19:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:22.570 19:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:22.570 19:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.570 19:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:22.570 19:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.570 19:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:22.570 19:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:22.570 19:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:22.570 19:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:22.570 19:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:22.570 19:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:22.570 19:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:22.570 19:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:22.570 19:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:22.570 19:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:22.570 19:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:22.570 19:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:15:22.571 19:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.571 19:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:23.541 nvme0n1 00:15:23.541 19:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.541 19:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:23.541 19:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:23.541 19:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.541 19:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:23.541 19:49:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.541 19:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:23.541 19:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:23.541 19:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.541 19:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:23.541 19:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.541 19:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:23.541 19:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:15:23.541 19:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:23.541 19:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:23.541 19:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:15:23.541 19:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:15:23.541 19:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGQxYTgwNWQ2MjAxOTNiOGYxMTkzODJlNDIwY2EwZmI0NWQ3NWE0ZDhkZGY1ZjU3ODQwMmY4MjIxNWE5YjYwZjfBZwo=: 00:15:23.541 19:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:15:23.541 19:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:23.541 19:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:15:23.541 19:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGQxYTgwNWQ2MjAxOTNiOGYxMTkzODJlNDIwY2EwZmI0NWQ3NWE0ZDhkZGY1ZjU3ODQwMmY4MjIxNWE5YjYwZjfBZwo=: 00:15:23.541 19:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:15:23.541 19:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:15:23.541 19:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:23.541 19:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:15:23.541 19:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:15:23.541 19:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:15:23.541 19:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:23.541 19:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:23.541 19:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.541 19:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:23.541 19:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.541 19:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:23.541 19:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:23.541 19:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:23.541 19:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:23.541 19:49:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:23.541 19:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:23.541 19:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:23.541 19:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:23.541 19:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:23.541 19:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:23.541 19:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:23.541 19:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:23.541 19:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.541 19:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:24.119 nvme0n1 00:15:24.119 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.119 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:24.119 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.119 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:24.119 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:24.119 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.119 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.119 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:24.119 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.119 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:24.119 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.119 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:15:24.119 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:24.119 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:24.119 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:15:24.119 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:24.119 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:24.119 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:24.119 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:24.119 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjNiYTczNjJjNjIwODBjOGNkMzAxYWI2NWVmNDA5MTCU46W7: 00:15:24.119 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZjI4ZjU1NTIwZTNkOTM0YTc2YTZhNzdjZDlkNjFkMjBlNDc0ZTg4YzgwM2M4MTQ5NDRkYzA5OTU2N2U4ZTUxNErXPwI=: 00:15:24.119 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:24.119 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:24.119 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjNiYTczNjJjNjIwODBjOGNkMzAxYWI2NWVmNDA5MTCU46W7: 00:15:24.119 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjI4ZjU1NTIwZTNkOTM0YTc2YTZhNzdjZDlkNjFkMjBlNDc0ZTg4YzgwM2M4MTQ5NDRkYzA5OTU2N2U4ZTUxNErXPwI=: ]] 00:15:24.119 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjI4ZjU1NTIwZTNkOTM0YTc2YTZhNzdjZDlkNjFkMjBlNDc0ZTg4YzgwM2M4MTQ5NDRkYzA5OTU2N2U4ZTUxNErXPwI=: 00:15:24.119 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:15:24.119 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:24.119 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:24.119 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:24.119 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:24.119 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:24.119 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:24.119 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.119 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:24.119 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.119 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:24.119 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:24.119 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:24.119 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:24.119 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:24.119 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:24.119 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:24.119 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:24.119 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:24.119 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:24.119 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:24.119 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:24.119 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.119 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:15:24.379 nvme0n1 00:15:24.379 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.379 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:24.379 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.379 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:24.379 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:24.379 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.379 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.379 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:24.379 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.379 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:24.379 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.379 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:24.379 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:15:24.379 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:24.379 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:24.379 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:24.379 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:24.379 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQzMWU1NzUxYzU1ZjFkOThiNTI5NTAxZjJlYzgwOWJiNTRjYWQ5NmE1NTJmZWMx4eq7yQ==: 00:15:24.379 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmQ2YzRlY2RiNWU1NTUyNjRlN2EwNjdhMjhkYzA1OGJhYTJlNTVmZGI3MzRmMzczCjrNXg==: 00:15:24.379 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:24.379 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:24.379 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQzMWU1NzUxYzU1ZjFkOThiNTI5NTAxZjJlYzgwOWJiNTRjYWQ5NmE1NTJmZWMx4eq7yQ==: 00:15:24.379 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmQ2YzRlY2RiNWU1NTUyNjRlN2EwNjdhMjhkYzA1OGJhYTJlNTVmZGI3MzRmMzczCjrNXg==: ]] 00:15:24.379 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmQ2YzRlY2RiNWU1NTUyNjRlN2EwNjdhMjhkYzA1OGJhYTJlNTVmZGI3MzRmMzczCjrNXg==: 00:15:24.379 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:15:24.379 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:24.379 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:24.379 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:24.379 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:24.379 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:15:24.379 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:24.379 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.379 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:24.379 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.379 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:24.379 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:24.379 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:24.379 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:24.379 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:24.380 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:24.380 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:24.380 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:24.380 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:24.380 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:24.380 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:24.380 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:24.380 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.380 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:24.640 nvme0n1 00:15:24.640 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.640 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:24.640 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:24.640 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.640 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:24.640 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.640 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.640 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:24.640 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.640 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:24.640 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.640 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:24.640 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:15:24.640 
19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:24.640 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:24.640 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:24.640 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:24.640 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODhkMGJhZDNjZGI0ZmIxYzg2NDJmNDEwMjUzNjdjNGbNjTNE: 00:15:24.640 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWE0ODUwMDk4NzE5NGRiOWI5NDlkZDBhNmU4OTZkYWJsq0QM: 00:15:24.640 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:24.640 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:24.640 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODhkMGJhZDNjZGI0ZmIxYzg2NDJmNDEwMjUzNjdjNGbNjTNE: 00:15:24.640 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWE0ODUwMDk4NzE5NGRiOWI5NDlkZDBhNmU4OTZkYWJsq0QM: ]] 00:15:24.640 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWE0ODUwMDk4NzE5NGRiOWI5NDlkZDBhNmU4OTZkYWJsq0QM: 00:15:24.640 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:15:24.640 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:24.640 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:24.640 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:24.640 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:15:24.640 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:24.640 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:24.640 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.640 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:24.640 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.640 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:24.640 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:24.640 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:24.640 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:24.641 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:24.641 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:24.641 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:24.641 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:24.641 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:24.641 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:24.641 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:24.641 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:24.641 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.641 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:24.641 nvme0n1 00:15:24.641 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.641 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:24.641 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:24.641 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.641 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:24.641 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.641 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.641 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:24.641 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.641 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:24.902 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.902 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:24.902 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:15:24.902 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:24.902 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:24.902 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:24.902 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:15:24.902 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjZmMWUxYTQ3YmZiZWE4NjM2NDI1ZjJhNzdlZDlmMTljNTMzNjMxMjg3NDVjYTA53JTdpw==: 00:15:24.902 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjIxY2Q2YTZiZGViZDI0ZDgyM2UzYzQ0MGViNTMxZWJvUMtB: 00:15:24.902 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:24.902 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:24.902 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjZmMWUxYTQ3YmZiZWE4NjM2NDI1ZjJhNzdlZDlmMTljNTMzNjMxMjg3NDVjYTA53JTdpw==: 00:15:24.902 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjIxY2Q2YTZiZGViZDI0ZDgyM2UzYzQ0MGViNTMxZWJvUMtB: ]] 00:15:24.902 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjIxY2Q2YTZiZGViZDI0ZDgyM2UzYzQ0MGViNTMxZWJvUMtB: 00:15:24.902 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:15:24.902 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:24.902 
19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:24.902 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:24.902 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:15:24.902 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:24.902 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:24.902 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.902 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:24.902 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.902 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:24.902 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:24.902 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:24.902 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:24.902 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:24.902 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:24.902 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:24.902 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:24.902 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:24.902 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:24.902 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:24.902 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:15:24.902 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.902 19:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:24.902 nvme0n1 00:15:24.902 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.902 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:24.902 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:24.902 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.902 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:24.902 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.902 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.902 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:24.902 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.902 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:15:24.902 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.902 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:24.902 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:15:24.902 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:24.902 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:24.902 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:24.902 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:15:24.902 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGQxYTgwNWQ2MjAxOTNiOGYxMTkzODJlNDIwY2EwZmI0NWQ3NWE0ZDhkZGY1ZjU3ODQwMmY4MjIxNWE5YjYwZjfBZwo=: 00:15:24.902 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:15:24.902 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:24.902 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:24.902 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGQxYTgwNWQ2MjAxOTNiOGYxMTkzODJlNDIwY2EwZmI0NWQ3NWE0ZDhkZGY1ZjU3ODQwMmY4MjIxNWE5YjYwZjfBZwo=: 00:15:24.902 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:15:24.902 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:15:24.902 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:24.902 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:24.902 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:24.902 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:15:24.902 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:24.902 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:24.902 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.902 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:24.902 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.902 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:24.902 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:24.902 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:24.902 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:24.902 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:24.902 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:24.902 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:24.902 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:24.902 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:24.902 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:24.902 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:24.902 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:24.902 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.902 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:25.168 nvme0n1 00:15:25.168 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.168 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:25.168 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:25.168 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.168 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:25.168 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.168 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:25.168 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:25.168 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.168 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:25.168 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.168 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:25.168 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:25.168 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:15:25.168 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:25.168 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:25.168 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:25.168 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:25.168 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjNiYTczNjJjNjIwODBjOGNkMzAxYWI2NWVmNDA5MTCU46W7: 00:15:25.168 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjI4ZjU1NTIwZTNkOTM0YTc2YTZhNzdjZDlkNjFkMjBlNDc0ZTg4YzgwM2M4MTQ5NDRkYzA5OTU2N2U4ZTUxNErXPwI=: 00:15:25.168 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:25.168 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:25.168 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjNiYTczNjJjNjIwODBjOGNkMzAxYWI2NWVmNDA5MTCU46W7: 00:15:25.168 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjI4ZjU1NTIwZTNkOTM0YTc2YTZhNzdjZDlkNjFkMjBlNDc0ZTg4YzgwM2M4MTQ5NDRkYzA5OTU2N2U4ZTUxNErXPwI=: ]] 00:15:25.168 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZjI4ZjU1NTIwZTNkOTM0YTc2YTZhNzdjZDlkNjFkMjBlNDc0ZTg4YzgwM2M4MTQ5NDRkYzA5OTU2N2U4ZTUxNErXPwI=: 00:15:25.168 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:15:25.168 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:25.168 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:25.168 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:25.168 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:25.168 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:25.168 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:25.168 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.168 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:25.168 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.168 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:25.168 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:25.168 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:25.168 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:25.168 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:25.168 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:25.168 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:25.168 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:25.168 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:25.168 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:25.168 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:25.168 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:25.168 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.168 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:25.168 nvme0n1 00:15:25.168 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.168 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:25.168 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.168 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:25.168 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:25.168 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.429 
19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:25.429 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:25.429 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.429 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQzMWU1NzUxYzU1ZjFkOThiNTI5NTAxZjJlYzgwOWJiNTRjYWQ5NmE1NTJmZWMx4eq7yQ==: 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmQ2YzRlY2RiNWU1NTUyNjRlN2EwNjdhMjhkYzA1OGJhYTJlNTVmZGI3MzRmMzczCjrNXg==: 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQzMWU1NzUxYzU1ZjFkOThiNTI5NTAxZjJlYzgwOWJiNTRjYWQ5NmE1NTJmZWMx4eq7yQ==: 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmQ2YzRlY2RiNWU1NTUyNjRlN2EwNjdhMjhkYzA1OGJhYTJlNTVmZGI3MzRmMzczCjrNXg==: ]] 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmQ2YzRlY2RiNWU1NTUyNjRlN2EwNjdhMjhkYzA1OGJhYTJlNTVmZGI3MzRmMzczCjrNXg==: 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:25.430 19:49:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:25.430 nvme0n1 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODhkMGJhZDNjZGI0ZmIxYzg2NDJmNDEwMjUzNjdjNGbNjTNE: 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWE0ODUwMDk4NzE5NGRiOWI5NDlkZDBhNmU4OTZkYWJsq0QM: 00:15:25.430 19:49:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODhkMGJhZDNjZGI0ZmIxYzg2NDJmNDEwMjUzNjdjNGbNjTNE: 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWE0ODUwMDk4NzE5NGRiOWI5NDlkZDBhNmU4OTZkYWJsq0QM: ]] 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWE0ODUwMDk4NzE5NGRiOWI5NDlkZDBhNmU4OTZkYWJsq0QM: 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.430 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:25.691 nvme0n1 00:15:25.691 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.691 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:15:25.691 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:25.691 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.691 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:25.691 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.691 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:25.691 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:25.691 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.691 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:25.691 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.691 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:25.691 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:15:25.691 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:25.691 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:25.691 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:25.691 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:15:25.691 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjZmMWUxYTQ3YmZiZWE4NjM2NDI1ZjJhNzdlZDlmMTljNTMzNjMxMjg3NDVjYTA53JTdpw==: 00:15:25.691 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjIxY2Q2YTZiZGViZDI0ZDgyM2UzYzQ0MGViNTMxZWJvUMtB: 00:15:25.691 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:25.691 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:25.691 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjZmMWUxYTQ3YmZiZWE4NjM2NDI1ZjJhNzdlZDlmMTljNTMzNjMxMjg3NDVjYTA53JTdpw==: 00:15:25.691 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjIxY2Q2YTZiZGViZDI0ZDgyM2UzYzQ0MGViNTMxZWJvUMtB: ]] 00:15:25.691 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjIxY2Q2YTZiZGViZDI0ZDgyM2UzYzQ0MGViNTMxZWJvUMtB: 00:15:25.691 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:15:25.691 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:25.691 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:25.691 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:25.691 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:15:25.691 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:25.691 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:25.691 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.691 19:49:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:25.691 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.692 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:25.692 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:25.692 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:25.692 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:25.692 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:25.692 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:25.692 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:25.692 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:25.692 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:25.692 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:25.692 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:25.692 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:15:25.692 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.692 19:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:25.952 nvme0n1 00:15:25.952 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.952 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:25.952 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:25.952 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.952 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:25.952 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.952 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:25.952 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:25.952 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.952 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:25.952 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.952 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:25.952 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:15:25.952 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:25.952 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:25.952 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:25.952 
19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:15:25.952 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGQxYTgwNWQ2MjAxOTNiOGYxMTkzODJlNDIwY2EwZmI0NWQ3NWE0ZDhkZGY1ZjU3ODQwMmY4MjIxNWE5YjYwZjfBZwo=: 00:15:25.952 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:15:25.952 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:25.952 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:25.952 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGQxYTgwNWQ2MjAxOTNiOGYxMTkzODJlNDIwY2EwZmI0NWQ3NWE0ZDhkZGY1ZjU3ODQwMmY4MjIxNWE5YjYwZjfBZwo=: 00:15:25.952 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:15:25.952 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:15:25.952 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:25.952 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:25.952 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:25.952 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:15:25.952 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:25.952 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:25.952 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.952 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:25.952 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.952 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:25.952 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:25.952 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:25.952 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:25.952 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:25.952 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:25.952 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:25.952 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:25.952 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:25.952 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:25.952 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:25.952 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:25.952 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.952 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
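The trace above repeats one fixed sequence for every digest/DH-group/keyid combination: the target side is loaded with the key material for the keyid under test, the SPDK initiator is restricted to the matching DH-HMAC-CHAP digest and DH group, a controller is attached with that key (adding the bidirectional controller key only when a ckey is defined), its presence is verified, and it is detached before the next combination. As a rough sketch only — rpc_cmd is the test suite's wrapper around scripts/rpc.py, and the address, NQNs, bdev name and key names below are simply the values seen in this run — one iteration on the initiator side reduces to:

    # limit the initiator to the digest/dhgroup being exercised (here sha384 + ffdhe3072)
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
    # attach with the host key for this keyid; --dhchap-ctrlr-key is passed only when a ckey exists
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # authentication succeeded if the controller shows up; then clean up for the next combination
    [[ "$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0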
00:15:26.213 nvme0n1 00:15:26.213 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.213 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:26.213 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.213 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:26.213 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:26.213 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.213 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.213 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:26.213 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.213 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:26.213 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.213 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:26.213 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:26.213 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:15:26.213 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:26.213 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:26.213 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:26.213 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:26.213 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjNiYTczNjJjNjIwODBjOGNkMzAxYWI2NWVmNDA5MTCU46W7: 00:15:26.213 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjI4ZjU1NTIwZTNkOTM0YTc2YTZhNzdjZDlkNjFkMjBlNDc0ZTg4YzgwM2M4MTQ5NDRkYzA5OTU2N2U4ZTUxNErXPwI=: 00:15:26.213 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:26.213 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:26.213 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjNiYTczNjJjNjIwODBjOGNkMzAxYWI2NWVmNDA5MTCU46W7: 00:15:26.213 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjI4ZjU1NTIwZTNkOTM0YTc2YTZhNzdjZDlkNjFkMjBlNDc0ZTg4YzgwM2M4MTQ5NDRkYzA5OTU2N2U4ZTUxNErXPwI=: ]] 00:15:26.213 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjI4ZjU1NTIwZTNkOTM0YTc2YTZhNzdjZDlkNjFkMjBlNDc0ZTg4YzgwM2M4MTQ5NDRkYzA5OTU2N2U4ZTUxNErXPwI=: 00:15:26.213 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:15:26.213 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:26.213 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:26.213 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:26.213 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:26.213 19:49:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:26.213 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:26.213 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.213 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:26.213 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.213 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:26.213 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:26.213 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:26.213 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:26.213 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:26.213 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:26.213 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:26.213 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:26.213 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:26.213 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:26.213 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:26.213 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:26.213 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.213 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:26.474 nvme0n1 00:15:26.474 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.474 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:26.474 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:26.474 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.474 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:26.474 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.474 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.474 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:26.474 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.474 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:26.474 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.474 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:26.474 19:49:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:15:26.474 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:26.474 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:26.474 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:26.474 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:26.474 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQzMWU1NzUxYzU1ZjFkOThiNTI5NTAxZjJlYzgwOWJiNTRjYWQ5NmE1NTJmZWMx4eq7yQ==: 00:15:26.474 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmQ2YzRlY2RiNWU1NTUyNjRlN2EwNjdhMjhkYzA1OGJhYTJlNTVmZGI3MzRmMzczCjrNXg==: 00:15:26.474 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:26.474 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:26.474 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQzMWU1NzUxYzU1ZjFkOThiNTI5NTAxZjJlYzgwOWJiNTRjYWQ5NmE1NTJmZWMx4eq7yQ==: 00:15:26.474 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmQ2YzRlY2RiNWU1NTUyNjRlN2EwNjdhMjhkYzA1OGJhYTJlNTVmZGI3MzRmMzczCjrNXg==: ]] 00:15:26.474 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmQ2YzRlY2RiNWU1NTUyNjRlN2EwNjdhMjhkYzA1OGJhYTJlNTVmZGI3MzRmMzczCjrNXg==: 00:15:26.474 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:15:26.474 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:26.474 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:26.474 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:26.474 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:26.474 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:26.474 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:26.474 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.474 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:26.474 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.474 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:26.474 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:26.474 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:26.474 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:26.474 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:26.474 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:26.474 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:26.474 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:26.474 19:49:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:26.474 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:26.474 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:26.474 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:26.474 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.474 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:26.735 nvme0n1 00:15:26.735 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.735 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:26.735 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:26.735 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.735 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:26.735 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.735 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.735 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:26.735 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.735 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:26.735 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.735 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:26.735 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:15:26.735 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:26.735 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:26.735 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:26.735 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:26.735 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODhkMGJhZDNjZGI0ZmIxYzg2NDJmNDEwMjUzNjdjNGbNjTNE: 00:15:26.735 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWE0ODUwMDk4NzE5NGRiOWI5NDlkZDBhNmU4OTZkYWJsq0QM: 00:15:26.735 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:26.735 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:26.735 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODhkMGJhZDNjZGI0ZmIxYzg2NDJmNDEwMjUzNjdjNGbNjTNE: 00:15:26.735 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWE0ODUwMDk4NzE5NGRiOWI5NDlkZDBhNmU4OTZkYWJsq0QM: ]] 00:15:26.735 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWE0ODUwMDk4NzE5NGRiOWI5NDlkZDBhNmU4OTZkYWJsq0QM: 00:15:26.735 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:15:26.735 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:26.735 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:26.735 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:26.735 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:15:26.735 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:26.735 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:26.735 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.735 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:26.735 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.735 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:26.735 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:26.735 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:26.735 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:26.735 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:26.735 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:26.735 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:26.735 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:26.735 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:26.735 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:26.735 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:26.736 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:26.736 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.736 19:49:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:26.997 nvme0n1 00:15:26.997 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.997 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:26.997 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.997 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:26.997 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:26.997 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.997 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.997 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:15:26.997 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.997 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:26.997 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.997 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:26.997 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:15:26.997 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:26.997 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:26.997 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:26.997 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:15:26.997 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjZmMWUxYTQ3YmZiZWE4NjM2NDI1ZjJhNzdlZDlmMTljNTMzNjMxMjg3NDVjYTA53JTdpw==: 00:15:26.997 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjIxY2Q2YTZiZGViZDI0ZDgyM2UzYzQ0MGViNTMxZWJvUMtB: 00:15:26.997 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:26.997 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:26.997 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjZmMWUxYTQ3YmZiZWE4NjM2NDI1ZjJhNzdlZDlmMTljNTMzNjMxMjg3NDVjYTA53JTdpw==: 00:15:26.997 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjIxY2Q2YTZiZGViZDI0ZDgyM2UzYzQ0MGViNTMxZWJvUMtB: ]] 00:15:26.997 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjIxY2Q2YTZiZGViZDI0ZDgyM2UzYzQ0MGViNTMxZWJvUMtB: 00:15:26.997 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:15:26.997 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:26.997 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:26.997 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:26.997 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:15:26.997 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:26.997 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:26.997 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.997 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:26.997 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.997 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:26.997 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:26.997 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:26.997 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:26.997 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:26.997 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:26.997 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:26.997 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:26.997 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:26.997 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:26.997 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:26.997 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:15:26.997 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.997 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:27.258 nvme0n1 00:15:27.258 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.258 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:27.258 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:27.258 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.258 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:27.258 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.542 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:27.542 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:27.542 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.542 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:27.542 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.542 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:27.542 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:15:27.542 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:27.542 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:27.543 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:27.543 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:15:27.543 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGQxYTgwNWQ2MjAxOTNiOGYxMTkzODJlNDIwY2EwZmI0NWQ3NWE0ZDhkZGY1ZjU3ODQwMmY4MjIxNWE5YjYwZjfBZwo=: 00:15:27.543 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:15:27.543 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:27.543 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:27.543 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OGQxYTgwNWQ2MjAxOTNiOGYxMTkzODJlNDIwY2EwZmI0NWQ3NWE0ZDhkZGY1ZjU3ODQwMmY4MjIxNWE5YjYwZjfBZwo=: 00:15:27.543 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:15:27.543 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:15:27.543 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:27.543 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:27.543 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:27.543 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:15:27.543 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:27.543 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:27.543 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.543 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:27.543 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.543 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:27.543 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:27.543 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:27.543 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:27.543 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:27.543 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:27.543 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:27.543 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:27.543 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:27.543 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:27.543 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:27.543 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:27.543 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.543 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:27.829 nvme0n1 00:15:27.829 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.829 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:27.829 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:27.829 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.829 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:27.830 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.830 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:27.830 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:27.830 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.830 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:27.830 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.830 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:27.830 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:27.830 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:15:27.830 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:27.830 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:27.830 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:27.830 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:27.830 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjNiYTczNjJjNjIwODBjOGNkMzAxYWI2NWVmNDA5MTCU46W7: 00:15:27.830 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjI4ZjU1NTIwZTNkOTM0YTc2YTZhNzdjZDlkNjFkMjBlNDc0ZTg4YzgwM2M4MTQ5NDRkYzA5OTU2N2U4ZTUxNErXPwI=: 00:15:27.830 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:27.830 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:15:27.830 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjNiYTczNjJjNjIwODBjOGNkMzAxYWI2NWVmNDA5MTCU46W7: 00:15:27.830 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjI4ZjU1NTIwZTNkOTM0YTc2YTZhNzdjZDlkNjFkMjBlNDc0ZTg4YzgwM2M4MTQ5NDRkYzA5OTU2N2U4ZTUxNErXPwI=: ]] 00:15:27.830 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjI4ZjU1NTIwZTNkOTM0YTc2YTZhNzdjZDlkNjFkMjBlNDc0ZTg4YzgwM2M4MTQ5NDRkYzA5OTU2N2U4ZTUxNErXPwI=: 00:15:27.830 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:15:27.830 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:27.830 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:27.830 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:15:27.830 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:27.830 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:27.830 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:27.830 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.830 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:27.830 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.830 19:49:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:27.830 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:27.830 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:27.830 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:27.830 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:27.830 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:27.830 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:27.830 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:27.830 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:27.830 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:27.830 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:27.830 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:27.830 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.830 19:49:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:28.402 nvme0n1 00:15:28.402 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.402 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:28.402 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:28.402 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.402 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:28.402 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.402 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:28.402 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:28.402 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.402 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:28.402 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.402 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:28.402 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:15:28.402 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:28.402 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:28.402 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:28.402 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:28.402 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NDQzMWU1NzUxYzU1ZjFkOThiNTI5NTAxZjJlYzgwOWJiNTRjYWQ5NmE1NTJmZWMx4eq7yQ==: 00:15:28.402 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmQ2YzRlY2RiNWU1NTUyNjRlN2EwNjdhMjhkYzA1OGJhYTJlNTVmZGI3MzRmMzczCjrNXg==: 00:15:28.402 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:28.402 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:15:28.402 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQzMWU1NzUxYzU1ZjFkOThiNTI5NTAxZjJlYzgwOWJiNTRjYWQ5NmE1NTJmZWMx4eq7yQ==: 00:15:28.402 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmQ2YzRlY2RiNWU1NTUyNjRlN2EwNjdhMjhkYzA1OGJhYTJlNTVmZGI3MzRmMzczCjrNXg==: ]] 00:15:28.402 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmQ2YzRlY2RiNWU1NTUyNjRlN2EwNjdhMjhkYzA1OGJhYTJlNTVmZGI3MzRmMzczCjrNXg==: 00:15:28.402 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:15:28.402 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:28.402 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:28.402 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:15:28.402 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:28.402 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:28.402 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:28.402 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.402 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:28.402 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.402 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:28.402 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:28.402 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:28.402 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:28.402 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:28.402 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:28.402 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:28.402 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:28.402 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:28.402 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:28.402 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:28.402 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.402 19:49:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.402 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:28.974 nvme0n1 00:15:28.974 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.974 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:28.974 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:28.974 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.974 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:28.974 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.974 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:28.974 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:28.974 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.974 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:28.974 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.974 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:28.974 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:15:28.974 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:28.974 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:28.974 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:28.974 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:28.974 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODhkMGJhZDNjZGI0ZmIxYzg2NDJmNDEwMjUzNjdjNGbNjTNE: 00:15:28.974 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWE0ODUwMDk4NzE5NGRiOWI5NDlkZDBhNmU4OTZkYWJsq0QM: 00:15:28.974 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:28.974 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:15:28.974 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODhkMGJhZDNjZGI0ZmIxYzg2NDJmNDEwMjUzNjdjNGbNjTNE: 00:15:28.974 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWE0ODUwMDk4NzE5NGRiOWI5NDlkZDBhNmU4OTZkYWJsq0QM: ]] 00:15:28.974 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWE0ODUwMDk4NzE5NGRiOWI5NDlkZDBhNmU4OTZkYWJsq0QM: 00:15:28.974 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:15:28.974 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:28.974 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:28.974 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:15:28.974 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:15:28.974 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:28.974 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:28.974 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.974 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:28.974 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.974 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:28.975 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:28.975 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:28.975 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:28.975 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:28.975 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:28.975 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:28.975 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:28.975 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:28.975 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:28.975 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:28.975 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:28.975 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.975 19:49:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:29.548 nvme0n1 00:15:29.548 19:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.548 19:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:29.548 19:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.548 19:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:29.548 19:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:29.548 19:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.548 19:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.548 19:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:29.548 19:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.548 19:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:29.548 19:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.548 19:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:29.548 19:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:15:29.548 19:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:29.548 19:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:29.548 19:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:29.548 19:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:15:29.548 19:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjZmMWUxYTQ3YmZiZWE4NjM2NDI1ZjJhNzdlZDlmMTljNTMzNjMxMjg3NDVjYTA53JTdpw==: 00:15:29.548 19:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjIxY2Q2YTZiZGViZDI0ZDgyM2UzYzQ0MGViNTMxZWJvUMtB: 00:15:29.548 19:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:29.548 19:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:15:29.548 19:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjZmMWUxYTQ3YmZiZWE4NjM2NDI1ZjJhNzdlZDlmMTljNTMzNjMxMjg3NDVjYTA53JTdpw==: 00:15:29.548 19:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjIxY2Q2YTZiZGViZDI0ZDgyM2UzYzQ0MGViNTMxZWJvUMtB: ]] 00:15:29.548 19:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjIxY2Q2YTZiZGViZDI0ZDgyM2UzYzQ0MGViNTMxZWJvUMtB: 00:15:29.548 19:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:15:29.548 19:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:29.548 19:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:29.548 19:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:15:29.548 19:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:15:29.548 19:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:29.548 19:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:29.548 19:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.548 19:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:29.548 19:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.548 19:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:29.548 19:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:29.548 19:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:29.548 19:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:29.548 19:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:29.548 19:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:29.548 19:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:29.548 19:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:29.548 19:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:29.548 19:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:29.548 19:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:29.548 19:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:15:29.548 19:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.548 19:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:29.809 nvme0n1 00:15:29.809 19:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.809 19:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:29.809 19:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:29.809 19:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.809 19:49:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:29.809 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.809 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.809 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:29.809 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.809 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:29.809 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.809 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:29.809 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:15:29.809 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:29.809 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:29.809 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:29.809 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:15:29.809 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGQxYTgwNWQ2MjAxOTNiOGYxMTkzODJlNDIwY2EwZmI0NWQ3NWE0ZDhkZGY1ZjU3ODQwMmY4MjIxNWE5YjYwZjfBZwo=: 00:15:29.809 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:15:29.809 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:29.809 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:15:29.809 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGQxYTgwNWQ2MjAxOTNiOGYxMTkzODJlNDIwY2EwZmI0NWQ3NWE0ZDhkZGY1ZjU3ODQwMmY4MjIxNWE5YjYwZjfBZwo=: 00:15:29.809 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:15:29.809 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:15:29.809 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:29.809 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:29.809 19:49:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:15:29.809 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:15:29.809 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:29.809 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:29.809 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.809 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:29.809 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.809 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:29.809 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:30.069 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:30.069 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:30.069 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:30.069 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:30.069 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:30.069 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:30.069 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:30.069 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:30.069 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:30.070 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:30.070 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.070 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:30.330 nvme0n1 00:15:30.330 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.330 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:30.330 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.330 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:30.330 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:30.330 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.330 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:30.330 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:30.330 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.330 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:30.330 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.330 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:30.330 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:30.330 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:15:30.331 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:30.331 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:30.331 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:15:30.331 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:30.331 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjNiYTczNjJjNjIwODBjOGNkMzAxYWI2NWVmNDA5MTCU46W7: 00:15:30.331 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjI4ZjU1NTIwZTNkOTM0YTc2YTZhNzdjZDlkNjFkMjBlNDc0ZTg4YzgwM2M4MTQ5NDRkYzA5OTU2N2U4ZTUxNErXPwI=: 00:15:30.331 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:30.331 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:15:30.331 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjNiYTczNjJjNjIwODBjOGNkMzAxYWI2NWVmNDA5MTCU46W7: 00:15:30.331 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjI4ZjU1NTIwZTNkOTM0YTc2YTZhNzdjZDlkNjFkMjBlNDc0ZTg4YzgwM2M4MTQ5NDRkYzA5OTU2N2U4ZTUxNErXPwI=: ]] 00:15:30.331 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjI4ZjU1NTIwZTNkOTM0YTc2YTZhNzdjZDlkNjFkMjBlNDc0ZTg4YzgwM2M4MTQ5NDRkYzA5OTU2N2U4ZTUxNErXPwI=: 00:15:30.331 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:15:30.331 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:30.331 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:30.331 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:15:30.331 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:30.331 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:30.331 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:30.592 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.592 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:30.592 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.592 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:30.592 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:30.592 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:30.592 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:30.592 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:30.592 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:30.592 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:30.592 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:30.592 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:30.592 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:30.592 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:30.592 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:30.592 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.592 19:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:31.163 nvme0n1 00:15:31.163 19:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.163 19:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:31.163 19:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:31.163 19:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.163 19:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:31.163 19:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.163 19:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:31.163 19:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:31.163 19:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.163 19:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:31.476 19:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.476 19:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:31.476 19:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:15:31.476 19:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:31.476 19:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:31.476 19:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:15:31.476 19:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:31.476 19:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQzMWU1NzUxYzU1ZjFkOThiNTI5NTAxZjJlYzgwOWJiNTRjYWQ5NmE1NTJmZWMx4eq7yQ==: 00:15:31.476 19:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmQ2YzRlY2RiNWU1NTUyNjRlN2EwNjdhMjhkYzA1OGJhYTJlNTVmZGI3MzRmMzczCjrNXg==: 00:15:31.476 19:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:31.476 19:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:15:31.476 19:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NDQzMWU1NzUxYzU1ZjFkOThiNTI5NTAxZjJlYzgwOWJiNTRjYWQ5NmE1NTJmZWMx4eq7yQ==: 00:15:31.476 19:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmQ2YzRlY2RiNWU1NTUyNjRlN2EwNjdhMjhkYzA1OGJhYTJlNTVmZGI3MzRmMzczCjrNXg==: ]] 00:15:31.476 19:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmQ2YzRlY2RiNWU1NTUyNjRlN2EwNjdhMjhkYzA1OGJhYTJlNTVmZGI3MzRmMzczCjrNXg==: 00:15:31.476 19:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:15:31.476 19:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:31.476 19:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:31.476 19:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:15:31.476 19:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:31.476 19:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:31.476 19:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:31.476 19:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.476 19:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:31.476 19:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.476 19:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:31.476 19:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:31.476 19:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:31.476 19:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:31.476 19:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:31.476 19:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:31.476 19:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:31.476 19:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:31.476 19:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:31.476 19:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:31.476 19:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:31.476 19:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:31.476 19:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.476 19:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:32.047 nvme0n1 00:15:32.047 19:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.047 19:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:32.047 19:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.047 19:49:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:32.047 19:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:32.047 19:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.047 19:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.047 19:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:32.047 19:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.047 19:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:32.047 19:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.047 19:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:32.047 19:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:15:32.047 19:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:32.047 19:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:32.047 19:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:15:32.047 19:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:32.047 19:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODhkMGJhZDNjZGI0ZmIxYzg2NDJmNDEwMjUzNjdjNGbNjTNE: 00:15:32.047 19:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWE0ODUwMDk4NzE5NGRiOWI5NDlkZDBhNmU4OTZkYWJsq0QM: 00:15:32.047 19:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:32.047 19:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:15:32.047 19:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODhkMGJhZDNjZGI0ZmIxYzg2NDJmNDEwMjUzNjdjNGbNjTNE: 00:15:32.047 19:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWE0ODUwMDk4NzE5NGRiOWI5NDlkZDBhNmU4OTZkYWJsq0QM: ]] 00:15:32.047 19:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWE0ODUwMDk4NzE5NGRiOWI5NDlkZDBhNmU4OTZkYWJsq0QM: 00:15:32.047 19:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:15:32.047 19:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:32.047 19:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:32.047 19:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:15:32.047 19:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:15:32.047 19:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:32.047 19:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:32.047 19:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.047 19:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:32.047 19:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.047 19:49:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:32.047 19:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:32.047 19:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:32.047 19:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:32.047 19:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:32.047 19:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:32.047 19:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:32.047 19:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:32.047 19:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:32.047 19:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:32.047 19:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:32.047 19:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:32.047 19:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.047 19:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:32.990 nvme0n1 00:15:32.990 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.990 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:32.990 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:32.990 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.990 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:32.990 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.990 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.991 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:32.991 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.991 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:32.991 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.991 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:32.991 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:15:32.991 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:32.991 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:32.991 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:15:32.991 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:15:32.991 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZjZmMWUxYTQ3YmZiZWE4NjM2NDI1ZjJhNzdlZDlmMTljNTMzNjMxMjg3NDVjYTA53JTdpw==: 00:15:32.991 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjIxY2Q2YTZiZGViZDI0ZDgyM2UzYzQ0MGViNTMxZWJvUMtB: 00:15:32.991 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:32.991 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:15:32.991 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjZmMWUxYTQ3YmZiZWE4NjM2NDI1ZjJhNzdlZDlmMTljNTMzNjMxMjg3NDVjYTA53JTdpw==: 00:15:32.991 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjIxY2Q2YTZiZGViZDI0ZDgyM2UzYzQ0MGViNTMxZWJvUMtB: ]] 00:15:32.991 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjIxY2Q2YTZiZGViZDI0ZDgyM2UzYzQ0MGViNTMxZWJvUMtB: 00:15:32.991 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:15:32.991 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:32.991 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:32.991 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:15:32.991 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:15:32.991 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:32.991 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:32.991 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.991 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:32.991 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.991 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:32.991 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:32.991 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:32.991 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:32.991 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:32.991 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:32.991 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:32.991 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:32.991 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:32.991 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:32.991 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:32.991 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:15:32.991 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.991 
19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:33.562 nvme0n1 00:15:33.562 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.562 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:33.562 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.562 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:33.562 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:33.822 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.822 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:33.822 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:33.822 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.822 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:33.822 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.822 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:33.822 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:15:33.822 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:33.822 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:15:33.822 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:15:33.822 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:15:33.822 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGQxYTgwNWQ2MjAxOTNiOGYxMTkzODJlNDIwY2EwZmI0NWQ3NWE0ZDhkZGY1ZjU3ODQwMmY4MjIxNWE5YjYwZjfBZwo=: 00:15:33.822 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:15:33.823 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:15:33.823 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:15:33.823 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGQxYTgwNWQ2MjAxOTNiOGYxMTkzODJlNDIwY2EwZmI0NWQ3NWE0ZDhkZGY1ZjU3ODQwMmY4MjIxNWE5YjYwZjfBZwo=: 00:15:33.823 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:15:33.823 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:15:33.823 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:33.823 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:15:33.823 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:15:33.823 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:15:33.823 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:33.823 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:33.823 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.823 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:33.823 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.823 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:33.823 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:33.823 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:33.823 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:33.823 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:33.823 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:33.823 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:33.823 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:33.823 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:33.823 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:33.823 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:33.823 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:33.823 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.823 19:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:34.395 nvme0n1 00:15:34.395 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.395 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:34.395 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:34.395 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.395 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:34.395 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.395 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:34.395 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:34.395 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.395 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:15:34.656 19:49:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjNiYTczNjJjNjIwODBjOGNkMzAxYWI2NWVmNDA5MTCU46W7: 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjI4ZjU1NTIwZTNkOTM0YTc2YTZhNzdjZDlkNjFkMjBlNDc0ZTg4YzgwM2M4MTQ5NDRkYzA5OTU2N2U4ZTUxNErXPwI=: 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjNiYTczNjJjNjIwODBjOGNkMzAxYWI2NWVmNDA5MTCU46W7: 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjI4ZjU1NTIwZTNkOTM0YTc2YTZhNzdjZDlkNjFkMjBlNDc0ZTg4YzgwM2M4MTQ5NDRkYzA5OTU2N2U4ZTUxNErXPwI=: ]] 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjI4ZjU1NTIwZTNkOTM0YTc2YTZhNzdjZDlkNjFkMjBlNDc0ZTg4YzgwM2M4MTQ5NDRkYzA5OTU2N2U4ZTUxNErXPwI=: 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:34.656 19:49:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:34.656 nvme0n1 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQzMWU1NzUxYzU1ZjFkOThiNTI5NTAxZjJlYzgwOWJiNTRjYWQ5NmE1NTJmZWMx4eq7yQ==: 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmQ2YzRlY2RiNWU1NTUyNjRlN2EwNjdhMjhkYzA1OGJhYTJlNTVmZGI3MzRmMzczCjrNXg==: 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQzMWU1NzUxYzU1ZjFkOThiNTI5NTAxZjJlYzgwOWJiNTRjYWQ5NmE1NTJmZWMx4eq7yQ==: 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmQ2YzRlY2RiNWU1NTUyNjRlN2EwNjdhMjhkYzA1OGJhYTJlNTVmZGI3MzRmMzczCjrNXg==: ]] 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmQ2YzRlY2RiNWU1NTUyNjRlN2EwNjdhMjhkYzA1OGJhYTJlNTVmZGI3MzRmMzczCjrNXg==: 00:15:34.656 19:49:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.656 19:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:34.918 nvme0n1 00:15:34.918 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.918 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:34.918 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:34.918 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.918 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:34.918 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.918 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:34.918 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:34.918 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.918 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:34.918 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.918 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:34.918 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:15:34.918 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:34.918 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:34.918 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:34.918 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:34.918 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODhkMGJhZDNjZGI0ZmIxYzg2NDJmNDEwMjUzNjdjNGbNjTNE: 00:15:34.918 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWE0ODUwMDk4NzE5NGRiOWI5NDlkZDBhNmU4OTZkYWJsq0QM: 00:15:34.918 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:34.918 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:34.918 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODhkMGJhZDNjZGI0ZmIxYzg2NDJmNDEwMjUzNjdjNGbNjTNE: 00:15:34.918 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWE0ODUwMDk4NzE5NGRiOWI5NDlkZDBhNmU4OTZkYWJsq0QM: ]] 00:15:34.918 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWE0ODUwMDk4NzE5NGRiOWI5NDlkZDBhNmU4OTZkYWJsq0QM: 00:15:34.918 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:15:34.918 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:34.918 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:34.918 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:34.918 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:15:34.918 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:34.918 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:34.918 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.918 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:34.918 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.918 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:34.918 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:34.918 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:34.918 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:34.918 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:34.918 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:34.918 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:34.918 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:34.918 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:34.918 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:34.918 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:34.918 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:34.918 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.918 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:35.177 nvme0n1 00:15:35.177 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.177 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:35.177 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.177 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:35.177 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:35.177 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.177 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.177 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:35.177 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.177 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:35.177 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.177 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:35.177 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:15:35.177 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:35.177 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:35.177 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:35.177 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:15:35.177 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjZmMWUxYTQ3YmZiZWE4NjM2NDI1ZjJhNzdlZDlmMTljNTMzNjMxMjg3NDVjYTA53JTdpw==: 00:15:35.177 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjIxY2Q2YTZiZGViZDI0ZDgyM2UzYzQ0MGViNTMxZWJvUMtB: 00:15:35.177 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:35.177 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:35.177 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:ZjZmMWUxYTQ3YmZiZWE4NjM2NDI1ZjJhNzdlZDlmMTljNTMzNjMxMjg3NDVjYTA53JTdpw==: 00:15:35.177 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjIxY2Q2YTZiZGViZDI0ZDgyM2UzYzQ0MGViNTMxZWJvUMtB: ]] 00:15:35.177 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjIxY2Q2YTZiZGViZDI0ZDgyM2UzYzQ0MGViNTMxZWJvUMtB: 00:15:35.177 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:15:35.177 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:35.177 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:35.177 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:35.177 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:15:35.177 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:35.177 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:35.177 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.177 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:35.177 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.177 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:35.177 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:35.177 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:35.177 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:35.177 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:35.177 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:35.177 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:35.177 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:35.177 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:35.177 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:35.177 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:35.177 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:15:35.177 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.177 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:35.435 nvme0n1 00:15:35.435 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.435 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGQxYTgwNWQ2MjAxOTNiOGYxMTkzODJlNDIwY2EwZmI0NWQ3NWE0ZDhkZGY1ZjU3ODQwMmY4MjIxNWE5YjYwZjfBZwo=: 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGQxYTgwNWQ2MjAxOTNiOGYxMTkzODJlNDIwY2EwZmI0NWQ3NWE0ZDhkZGY1ZjU3ODQwMmY4MjIxNWE5YjYwZjfBZwo=: 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:35.436 nvme0n1 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjNiYTczNjJjNjIwODBjOGNkMzAxYWI2NWVmNDA5MTCU46W7: 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZjI4ZjU1NTIwZTNkOTM0YTc2YTZhNzdjZDlkNjFkMjBlNDc0ZTg4YzgwM2M4MTQ5NDRkYzA5OTU2N2U4ZTUxNErXPwI=: 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjNiYTczNjJjNjIwODBjOGNkMzAxYWI2NWVmNDA5MTCU46W7: 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjI4ZjU1NTIwZTNkOTM0YTc2YTZhNzdjZDlkNjFkMjBlNDc0ZTg4YzgwM2M4MTQ5NDRkYzA5OTU2N2U4ZTUxNErXPwI=: ]] 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjI4ZjU1NTIwZTNkOTM0YTc2YTZhNzdjZDlkNjFkMjBlNDc0ZTg4YzgwM2M4MTQ5NDRkYzA5OTU2N2U4ZTUxNErXPwI=: 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.436 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:15:35.694 nvme0n1 00:15:35.694 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.694 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:35.694 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.694 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:35.694 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:35.694 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.694 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.694 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:35.694 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.694 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:35.694 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.694 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:35.694 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:15:35.694 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:35.694 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:35.694 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:35.694 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:35.694 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQzMWU1NzUxYzU1ZjFkOThiNTI5NTAxZjJlYzgwOWJiNTRjYWQ5NmE1NTJmZWMx4eq7yQ==: 00:15:35.694 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmQ2YzRlY2RiNWU1NTUyNjRlN2EwNjdhMjhkYzA1OGJhYTJlNTVmZGI3MzRmMzczCjrNXg==: 00:15:35.694 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:35.694 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:35.694 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQzMWU1NzUxYzU1ZjFkOThiNTI5NTAxZjJlYzgwOWJiNTRjYWQ5NmE1NTJmZWMx4eq7yQ==: 00:15:35.695 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmQ2YzRlY2RiNWU1NTUyNjRlN2EwNjdhMjhkYzA1OGJhYTJlNTVmZGI3MzRmMzczCjrNXg==: ]] 00:15:35.695 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmQ2YzRlY2RiNWU1NTUyNjRlN2EwNjdhMjhkYzA1OGJhYTJlNTVmZGI3MzRmMzczCjrNXg==: 00:15:35.695 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:15:35.695 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:35.695 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:35.695 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:35.695 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:35.695 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:15:35.695 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:35.695 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.695 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:35.695 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.695 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:35.695 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:35.695 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:35.695 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:35.695 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:35.695 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:35.695 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:35.695 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:35.695 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:35.695 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:35.695 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:35.695 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.695 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.695 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:35.953 nvme0n1 00:15:35.953 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.953 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:35.953 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:35.953 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.953 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:35.953 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.953 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.953 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:35.953 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.953 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:35.953 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.953 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:35.953 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:15:35.953 
19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:35.953 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:35.953 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:35.953 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:35.953 19:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODhkMGJhZDNjZGI0ZmIxYzg2NDJmNDEwMjUzNjdjNGbNjTNE: 00:15:35.953 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWE0ODUwMDk4NzE5NGRiOWI5NDlkZDBhNmU4OTZkYWJsq0QM: 00:15:35.953 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:35.953 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:35.953 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODhkMGJhZDNjZGI0ZmIxYzg2NDJmNDEwMjUzNjdjNGbNjTNE: 00:15:35.953 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWE0ODUwMDk4NzE5NGRiOWI5NDlkZDBhNmU4OTZkYWJsq0QM: ]] 00:15:35.953 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWE0ODUwMDk4NzE5NGRiOWI5NDlkZDBhNmU4OTZkYWJsq0QM: 00:15:35.953 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:15:35.953 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:35.953 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:35.953 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:35.953 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:15:35.953 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:35.953 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:35.953 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.953 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:35.954 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.954 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:35.954 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:35.954 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:35.954 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:35.954 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:35.954 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:35.954 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:35.954 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:35.954 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:35.954 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:35.954 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:35.954 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:35.954 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.954 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:35.954 nvme0n1 00:15:35.954 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.954 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:35.954 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:35.954 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.954 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:35.954 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.954 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.954 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:35.954 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.954 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:35.954 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.954 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:35.954 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:15:35.954 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:35.954 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:35.954 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:35.954 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:15:35.954 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjZmMWUxYTQ3YmZiZWE4NjM2NDI1ZjJhNzdlZDlmMTljNTMzNjMxMjg3NDVjYTA53JTdpw==: 00:15:35.954 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjIxY2Q2YTZiZGViZDI0ZDgyM2UzYzQ0MGViNTMxZWJvUMtB: 00:15:35.954 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:35.954 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:35.954 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjZmMWUxYTQ3YmZiZWE4NjM2NDI1ZjJhNzdlZDlmMTljNTMzNjMxMjg3NDVjYTA53JTdpw==: 00:15:35.954 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjIxY2Q2YTZiZGViZDI0ZDgyM2UzYzQ0MGViNTMxZWJvUMtB: ]] 00:15:35.954 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjIxY2Q2YTZiZGViZDI0ZDgyM2UzYzQ0MGViNTMxZWJvUMtB: 00:15:35.954 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:15:35.954 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:35.954 
19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:35.954 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:35.954 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:15:35.954 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:35.954 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:35.954 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.954 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:35.954 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.954 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:35.954 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:35.954 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:35.954 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:35.954 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:35.954 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:35.954 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:35.954 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:35.954 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:35.954 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:35.954 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:35.954 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:15:35.954 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.954 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:36.212 nvme0n1 00:15:36.212 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.212 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:36.212 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:36.212 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.212 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:36.212 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.212 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.212 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:36.212 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.212 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:15:36.212 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.212 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:36.212 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:15:36.212 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:36.212 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:36.212 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:15:36.212 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:15:36.212 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGQxYTgwNWQ2MjAxOTNiOGYxMTkzODJlNDIwY2EwZmI0NWQ3NWE0ZDhkZGY1ZjU3ODQwMmY4MjIxNWE5YjYwZjfBZwo=: 00:15:36.212 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:15:36.212 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:36.212 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:15:36.212 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGQxYTgwNWQ2MjAxOTNiOGYxMTkzODJlNDIwY2EwZmI0NWQ3NWE0ZDhkZGY1ZjU3ODQwMmY4MjIxNWE5YjYwZjfBZwo=: 00:15:36.212 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:15:36.212 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:15:36.212 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:36.212 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:36.212 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:15:36.212 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:15:36.212 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:36.212 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:36.212 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.212 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:36.213 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.213 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:36.213 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:36.213 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:36.213 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:36.213 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:36.213 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:36.213 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:36.213 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:36.213 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:36.213 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:36.213 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:36.213 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:36.213 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.213 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:36.470 nvme0n1 00:15:36.470 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.470 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:36.470 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:36.470 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.470 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:36.470 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.470 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.470 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:36.470 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.470 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:36.470 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.470 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:36.470 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:36.470 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:15:36.470 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:36.470 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:36.470 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:36.470 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:36.470 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjNiYTczNjJjNjIwODBjOGNkMzAxYWI2NWVmNDA5MTCU46W7: 00:15:36.470 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjI4ZjU1NTIwZTNkOTM0YTc2YTZhNzdjZDlkNjFkMjBlNDc0ZTg4YzgwM2M4MTQ5NDRkYzA5OTU2N2U4ZTUxNErXPwI=: 00:15:36.470 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:36.470 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:36.470 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjNiYTczNjJjNjIwODBjOGNkMzAxYWI2NWVmNDA5MTCU46W7: 00:15:36.470 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjI4ZjU1NTIwZTNkOTM0YTc2YTZhNzdjZDlkNjFkMjBlNDc0ZTg4YzgwM2M4MTQ5NDRkYzA5OTU2N2U4ZTUxNErXPwI=: ]] 00:15:36.470 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZjI4ZjU1NTIwZTNkOTM0YTc2YTZhNzdjZDlkNjFkMjBlNDc0ZTg4YzgwM2M4MTQ5NDRkYzA5OTU2N2U4ZTUxNErXPwI=: 00:15:36.470 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:15:36.470 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:36.470 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:36.470 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:36.470 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:36.470 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:36.470 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:36.470 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.470 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:36.470 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.470 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:36.470 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:36.470 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:36.470 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:36.470 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:36.470 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:36.470 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:36.470 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:36.470 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:36.470 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:36.470 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:36.470 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:36.470 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.470 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:36.729 nvme0n1 00:15:36.729 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.729 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:36.729 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:36.729 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.729 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:36.729 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.729 
19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.729 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:36.729 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.729 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:36.729 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.729 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:36.729 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:15:36.729 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:36.729 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:36.729 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:36.729 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:36.729 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQzMWU1NzUxYzU1ZjFkOThiNTI5NTAxZjJlYzgwOWJiNTRjYWQ5NmE1NTJmZWMx4eq7yQ==: 00:15:36.729 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmQ2YzRlY2RiNWU1NTUyNjRlN2EwNjdhMjhkYzA1OGJhYTJlNTVmZGI3MzRmMzczCjrNXg==: 00:15:36.729 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:36.729 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:36.729 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQzMWU1NzUxYzU1ZjFkOThiNTI5NTAxZjJlYzgwOWJiNTRjYWQ5NmE1NTJmZWMx4eq7yQ==: 00:15:36.729 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmQ2YzRlY2RiNWU1NTUyNjRlN2EwNjdhMjhkYzA1OGJhYTJlNTVmZGI3MzRmMzczCjrNXg==: ]] 00:15:36.729 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmQ2YzRlY2RiNWU1NTUyNjRlN2EwNjdhMjhkYzA1OGJhYTJlNTVmZGI3MzRmMzczCjrNXg==: 00:15:36.729 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:15:36.729 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:36.729 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:36.729 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:36.729 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:36.729 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:36.729 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:36.729 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.729 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:36.729 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.729 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:36.729 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:36.729 19:49:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:36.729 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:36.729 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:36.729 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:36.729 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:36.729 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:36.729 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:36.729 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:36.729 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:36.729 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:36.729 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.729 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:36.729 nvme0n1 00:15:36.729 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.987 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:36.987 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.987 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:36.987 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:36.987 19:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.987 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.987 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:36.987 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.987 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:36.987 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.987 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:36.987 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:15:36.987 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:36.987 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:36.987 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:36.987 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:36.987 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODhkMGJhZDNjZGI0ZmIxYzg2NDJmNDEwMjUzNjdjNGbNjTNE: 00:15:36.987 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWE0ODUwMDk4NzE5NGRiOWI5NDlkZDBhNmU4OTZkYWJsq0QM: 00:15:36.987 19:49:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:36.987 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:36.987 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODhkMGJhZDNjZGI0ZmIxYzg2NDJmNDEwMjUzNjdjNGbNjTNE: 00:15:36.987 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWE0ODUwMDk4NzE5NGRiOWI5NDlkZDBhNmU4OTZkYWJsq0QM: ]] 00:15:36.987 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWE0ODUwMDk4NzE5NGRiOWI5NDlkZDBhNmU4OTZkYWJsq0QM: 00:15:36.987 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:15:36.987 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:36.987 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:36.987 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:36.987 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:15:36.987 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:36.987 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:36.987 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.987 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:36.987 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.987 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:36.987 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:36.987 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:36.987 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:36.987 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:36.987 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:36.987 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:36.987 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:36.987 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:36.987 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:36.987 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:36.987 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:36.987 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.987 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:36.987 nvme0n1 00:15:36.987 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.987 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:36.987 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:36.987 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.987 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:37.245 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.245 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.245 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:37.245 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.245 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:37.245 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.245 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:37.245 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:15:37.245 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:37.245 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:37.245 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:37.245 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:15:37.245 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjZmMWUxYTQ3YmZiZWE4NjM2NDI1ZjJhNzdlZDlmMTljNTMzNjMxMjg3NDVjYTA53JTdpw==: 00:15:37.245 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjIxY2Q2YTZiZGViZDI0ZDgyM2UzYzQ0MGViNTMxZWJvUMtB: 00:15:37.245 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:37.245 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:37.245 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjZmMWUxYTQ3YmZiZWE4NjM2NDI1ZjJhNzdlZDlmMTljNTMzNjMxMjg3NDVjYTA53JTdpw==: 00:15:37.245 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjIxY2Q2YTZiZGViZDI0ZDgyM2UzYzQ0MGViNTMxZWJvUMtB: ]] 00:15:37.245 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjIxY2Q2YTZiZGViZDI0ZDgyM2UzYzQ0MGViNTMxZWJvUMtB: 00:15:37.245 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:15:37.245 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:37.245 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:37.245 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:37.245 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:15:37.245 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:37.245 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:37.245 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.245 19:49:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:37.245 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.245 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:37.245 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:37.245 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:37.245 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:37.245 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:37.245 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:37.245 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:37.245 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:37.245 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:37.245 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:37.246 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:37.246 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:15:37.246 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.246 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:37.246 nvme0n1 00:15:37.246 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.246 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:37.246 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:37.246 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.246 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:37.246 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.502 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.502 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:37.502 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.502 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:37.502 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.503 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:37.503 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:15:37.503 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:37.503 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:37.503 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:15:37.503 
19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:15:37.503 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGQxYTgwNWQ2MjAxOTNiOGYxMTkzODJlNDIwY2EwZmI0NWQ3NWE0ZDhkZGY1ZjU3ODQwMmY4MjIxNWE5YjYwZjfBZwo=: 00:15:37.503 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:15:37.503 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:37.503 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:15:37.503 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGQxYTgwNWQ2MjAxOTNiOGYxMTkzODJlNDIwY2EwZmI0NWQ3NWE0ZDhkZGY1ZjU3ODQwMmY4MjIxNWE5YjYwZjfBZwo=: 00:15:37.503 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:15:37.503 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:15:37.503 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:37.503 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:37.503 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:15:37.503 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:15:37.503 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:37.503 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:37.503 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.503 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:37.503 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.503 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:37.503 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:37.503 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:37.503 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:37.503 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:37.503 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:37.503 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:37.503 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:37.503 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:37.503 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:37.503 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:37.503 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:37.503 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.503 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:15:37.503 nvme0n1 00:15:37.503 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.503 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:37.503 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:37.503 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.503 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:37.503 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.503 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.503 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:37.503 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.760 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:37.760 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.760 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:37.760 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:37.760 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:15:37.760 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:37.760 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:37.760 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:37.760 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:37.760 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjNiYTczNjJjNjIwODBjOGNkMzAxYWI2NWVmNDA5MTCU46W7: 00:15:37.760 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjI4ZjU1NTIwZTNkOTM0YTc2YTZhNzdjZDlkNjFkMjBlNDc0ZTg4YzgwM2M4MTQ5NDRkYzA5OTU2N2U4ZTUxNErXPwI=: 00:15:37.760 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:37.760 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:15:37.760 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjNiYTczNjJjNjIwODBjOGNkMzAxYWI2NWVmNDA5MTCU46W7: 00:15:37.760 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjI4ZjU1NTIwZTNkOTM0YTc2YTZhNzdjZDlkNjFkMjBlNDc0ZTg4YzgwM2M4MTQ5NDRkYzA5OTU2N2U4ZTUxNErXPwI=: ]] 00:15:37.760 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjI4ZjU1NTIwZTNkOTM0YTc2YTZhNzdjZDlkNjFkMjBlNDc0ZTg4YzgwM2M4MTQ5NDRkYzA5OTU2N2U4ZTUxNErXPwI=: 00:15:37.760 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:15:37.760 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:37.760 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:37.760 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:15:37.760 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:37.760 19:49:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:37.760 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:37.760 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.760 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:37.760 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.760 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:37.760 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:37.760 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:37.760 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:37.760 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:37.760 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:37.760 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:37.760 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:37.760 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:37.760 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:37.760 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:37.760 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:37.760 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.760 19:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:38.018 nvme0n1 00:15:38.018 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.018 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:38.018 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:38.018 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.018 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:38.018 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.018 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:38.018 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:38.018 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.019 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:38.019 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.019 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:38.019 19:49:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:15:38.019 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:38.019 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:38.019 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:38.019 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:38.019 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQzMWU1NzUxYzU1ZjFkOThiNTI5NTAxZjJlYzgwOWJiNTRjYWQ5NmE1NTJmZWMx4eq7yQ==: 00:15:38.019 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmQ2YzRlY2RiNWU1NTUyNjRlN2EwNjdhMjhkYzA1OGJhYTJlNTVmZGI3MzRmMzczCjrNXg==: 00:15:38.019 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:38.019 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:15:38.019 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQzMWU1NzUxYzU1ZjFkOThiNTI5NTAxZjJlYzgwOWJiNTRjYWQ5NmE1NTJmZWMx4eq7yQ==: 00:15:38.019 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmQ2YzRlY2RiNWU1NTUyNjRlN2EwNjdhMjhkYzA1OGJhYTJlNTVmZGI3MzRmMzczCjrNXg==: ]] 00:15:38.019 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmQ2YzRlY2RiNWU1NTUyNjRlN2EwNjdhMjhkYzA1OGJhYTJlNTVmZGI3MzRmMzczCjrNXg==: 00:15:38.019 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:15:38.019 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:38.019 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:38.019 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:15:38.019 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:38.019 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:38.019 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:38.019 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.019 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:38.019 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.019 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:38.019 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:38.019 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:38.019 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:38.019 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:38.019 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:38.019 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:38.019 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:38.019 19:49:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:38.019 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:38.019 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:38.019 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:38.019 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.019 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:38.585 nvme0n1 00:15:38.585 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.585 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:38.585 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:38.585 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.585 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:38.585 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.585 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:38.585 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:38.585 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.585 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:38.585 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.585 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:38.585 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:15:38.585 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:38.585 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:38.585 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:38.585 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:38.585 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODhkMGJhZDNjZGI0ZmIxYzg2NDJmNDEwMjUzNjdjNGbNjTNE: 00:15:38.585 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWE0ODUwMDk4NzE5NGRiOWI5NDlkZDBhNmU4OTZkYWJsq0QM: 00:15:38.585 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:38.585 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:15:38.585 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODhkMGJhZDNjZGI0ZmIxYzg2NDJmNDEwMjUzNjdjNGbNjTNE: 00:15:38.585 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWE0ODUwMDk4NzE5NGRiOWI5NDlkZDBhNmU4OTZkYWJsq0QM: ]] 00:15:38.585 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWE0ODUwMDk4NzE5NGRiOWI5NDlkZDBhNmU4OTZkYWJsq0QM: 00:15:38.585 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:15:38.585 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:38.585 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:38.585 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:15:38.585 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:15:38.585 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:38.585 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:38.585 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.585 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:38.585 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.585 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:38.585 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:38.585 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:38.585 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:38.585 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:38.585 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:38.585 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:38.585 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:38.585 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:38.585 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:38.585 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:38.585 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:38.585 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.586 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:38.845 nvme0n1 00:15:38.845 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.845 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:38.845 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:38.845 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.845 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:38.845 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.845 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:38.845 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:15:38.845 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.845 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:38.845 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.845 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:38.845 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:15:38.845 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:38.845 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:38.845 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:38.845 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:15:38.845 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjZmMWUxYTQ3YmZiZWE4NjM2NDI1ZjJhNzdlZDlmMTljNTMzNjMxMjg3NDVjYTA53JTdpw==: 00:15:38.845 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjIxY2Q2YTZiZGViZDI0ZDgyM2UzYzQ0MGViNTMxZWJvUMtB: 00:15:38.845 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:38.845 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:15:38.845 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjZmMWUxYTQ3YmZiZWE4NjM2NDI1ZjJhNzdlZDlmMTljNTMzNjMxMjg3NDVjYTA53JTdpw==: 00:15:38.845 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjIxY2Q2YTZiZGViZDI0ZDgyM2UzYzQ0MGViNTMxZWJvUMtB: ]] 00:15:38.845 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjIxY2Q2YTZiZGViZDI0ZDgyM2UzYzQ0MGViNTMxZWJvUMtB: 00:15:38.845 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:15:38.845 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:38.845 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:38.845 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:15:38.845 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:15:38.845 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:38.845 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:38.845 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.845 19:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:38.845 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.845 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:38.845 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:38.845 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:38.845 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:38.845 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:38.845 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:38.845 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:38.845 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:38.845 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:38.845 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:38.845 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:38.845 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:15:38.845 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.845 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:39.411 nvme0n1 00:15:39.411 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.411 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:39.411 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.411 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:39.411 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:39.411 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.411 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.411 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:39.411 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.411 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:39.411 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.411 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:39.411 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:15:39.411 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:39.411 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:39.411 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:15:39.411 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:15:39.411 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGQxYTgwNWQ2MjAxOTNiOGYxMTkzODJlNDIwY2EwZmI0NWQ3NWE0ZDhkZGY1ZjU3ODQwMmY4MjIxNWE5YjYwZjfBZwo=: 00:15:39.411 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:15:39.411 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:39.411 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:15:39.411 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OGQxYTgwNWQ2MjAxOTNiOGYxMTkzODJlNDIwY2EwZmI0NWQ3NWE0ZDhkZGY1ZjU3ODQwMmY4MjIxNWE5YjYwZjfBZwo=: 00:15:39.411 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:15:39.411 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:15:39.411 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:39.411 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:39.411 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:15:39.411 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:15:39.411 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:39.411 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:39.411 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.411 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:39.411 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.411 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:39.411 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:39.411 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:39.411 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:39.411 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:39.411 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:39.411 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:39.411 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:39.411 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:39.411 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:39.411 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:39.411 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:39.411 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.411 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:39.669 nvme0n1 00:15:39.669 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.669 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:39.669 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.669 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:39.669 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:39.669 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.669 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.669 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:39.669 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.669 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:39.669 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.669 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:15:39.669 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:39.669 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:15:39.669 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:39.669 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:39.669 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:15:39.669 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:15:39.669 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjNiYTczNjJjNjIwODBjOGNkMzAxYWI2NWVmNDA5MTCU46W7: 00:15:39.670 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjI4ZjU1NTIwZTNkOTM0YTc2YTZhNzdjZDlkNjFkMjBlNDc0ZTg4YzgwM2M4MTQ5NDRkYzA5OTU2N2U4ZTUxNErXPwI=: 00:15:39.670 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:39.670 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:15:39.670 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjNiYTczNjJjNjIwODBjOGNkMzAxYWI2NWVmNDA5MTCU46W7: 00:15:39.670 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjI4ZjU1NTIwZTNkOTM0YTc2YTZhNzdjZDlkNjFkMjBlNDc0ZTg4YzgwM2M4MTQ5NDRkYzA5OTU2N2U4ZTUxNErXPwI=: ]] 00:15:39.670 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjI4ZjU1NTIwZTNkOTM0YTc2YTZhNzdjZDlkNjFkMjBlNDc0ZTg4YzgwM2M4MTQ5NDRkYzA5OTU2N2U4ZTUxNErXPwI=: 00:15:39.670 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:15:39.670 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:39.670 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:39.670 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:15:39.670 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:15:39.670 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:39.670 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:39.670 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.670 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:39.670 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.670 19:49:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:39.670 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:39.670 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:39.670 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:39.670 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:39.670 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:39.670 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:39.670 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:39.670 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:39.670 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:39.670 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:39.670 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:39.670 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.670 19:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:40.236 nvme0n1 00:15:40.236 19:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.236 19:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:40.236 19:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:40.236 19:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.236 19:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:40.236 19:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.495 19:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:40.495 19:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:40.495 19:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.495 19:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:40.495 19:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.495 19:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:40.495 19:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:15:40.495 19:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:40.495 19:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:40.495 19:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:15:40.495 19:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:40.495 19:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NDQzMWU1NzUxYzU1ZjFkOThiNTI5NTAxZjJlYzgwOWJiNTRjYWQ5NmE1NTJmZWMx4eq7yQ==: 00:15:40.495 19:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmQ2YzRlY2RiNWU1NTUyNjRlN2EwNjdhMjhkYzA1OGJhYTJlNTVmZGI3MzRmMzczCjrNXg==: 00:15:40.495 19:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:40.495 19:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:15:40.495 19:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQzMWU1NzUxYzU1ZjFkOThiNTI5NTAxZjJlYzgwOWJiNTRjYWQ5NmE1NTJmZWMx4eq7yQ==: 00:15:40.495 19:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmQ2YzRlY2RiNWU1NTUyNjRlN2EwNjdhMjhkYzA1OGJhYTJlNTVmZGI3MzRmMzczCjrNXg==: ]] 00:15:40.495 19:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmQ2YzRlY2RiNWU1NTUyNjRlN2EwNjdhMjhkYzA1OGJhYTJlNTVmZGI3MzRmMzczCjrNXg==: 00:15:40.495 19:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:15:40.495 19:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:40.495 19:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:40.495 19:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:15:40.495 19:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:15:40.495 19:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:40.495 19:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:40.495 19:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.495 19:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:40.495 19:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.495 19:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:40.495 19:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:40.495 19:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:40.495 19:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:40.495 19:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:40.495 19:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:40.495 19:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:40.495 19:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:40.495 19:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:40.495 19:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:40.495 19:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:40.495 19:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:40.495 19:49:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.495 19:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:41.062 nvme0n1 00:15:41.062 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.062 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:41.062 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.062 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:41.062 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:41.062 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.062 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:41.062 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:41.062 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.062 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:41.062 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.062 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:41.062 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:15:41.062 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:41.062 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:41.062 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:15:41.062 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:41.062 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODhkMGJhZDNjZGI0ZmIxYzg2NDJmNDEwMjUzNjdjNGbNjTNE: 00:15:41.062 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWE0ODUwMDk4NzE5NGRiOWI5NDlkZDBhNmU4OTZkYWJsq0QM: 00:15:41.062 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:41.062 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:15:41.062 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODhkMGJhZDNjZGI0ZmIxYzg2NDJmNDEwMjUzNjdjNGbNjTNE: 00:15:41.062 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWE0ODUwMDk4NzE5NGRiOWI5NDlkZDBhNmU4OTZkYWJsq0QM: ]] 00:15:41.062 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWE0ODUwMDk4NzE5NGRiOWI5NDlkZDBhNmU4OTZkYWJsq0QM: 00:15:41.062 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:15:41.062 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:41.062 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:41.062 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:15:41.062 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:15:41.062 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:41.062 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:41.062 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.062 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:41.062 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.062 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:41.062 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:41.062 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:41.062 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:41.062 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:41.062 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:41.062 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:41.062 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:41.062 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:41.062 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:41.062 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:41.062 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:41.062 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.062 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:41.629 nvme0n1 00:15:41.629 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.629 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:41.629 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:41.629 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.629 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:41.629 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.629 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:41.629 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:41.629 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.629 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:41.629 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.629 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:41.629 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:15:41.629 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:41.629 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:41.629 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:15:41.629 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:15:41.629 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjZmMWUxYTQ3YmZiZWE4NjM2NDI1ZjJhNzdlZDlmMTljNTMzNjMxMjg3NDVjYTA53JTdpw==: 00:15:41.629 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjIxY2Q2YTZiZGViZDI0ZDgyM2UzYzQ0MGViNTMxZWJvUMtB: 00:15:41.629 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:41.629 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:15:41.629 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjZmMWUxYTQ3YmZiZWE4NjM2NDI1ZjJhNzdlZDlmMTljNTMzNjMxMjg3NDVjYTA53JTdpw==: 00:15:41.629 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjIxY2Q2YTZiZGViZDI0ZDgyM2UzYzQ0MGViNTMxZWJvUMtB: ]] 00:15:41.629 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjIxY2Q2YTZiZGViZDI0ZDgyM2UzYzQ0MGViNTMxZWJvUMtB: 00:15:41.629 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:15:41.629 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:41.629 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:41.630 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:15:41.630 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:15:41.630 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:41.630 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:41.630 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.630 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:41.888 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.888 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:41.888 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:41.888 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:41.888 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:41.888 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:41.888 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:41.888 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:41.888 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:41.888 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:41.888 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:41.888 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:41.888 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:15:41.888 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.888 19:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:42.146 nvme0n1 00:15:42.146 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.146 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:42.146 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:42.146 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.146 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:42.404 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.404 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.404 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:42.404 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.404 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:42.404 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.404 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:15:42.404 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:15:42.404 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:42.404 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:15:42.404 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:15:42.404 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:15:42.404 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGQxYTgwNWQ2MjAxOTNiOGYxMTkzODJlNDIwY2EwZmI0NWQ3NWE0ZDhkZGY1ZjU3ODQwMmY4MjIxNWE5YjYwZjfBZwo=: 00:15:42.404 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:15:42.404 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:15:42.404 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:15:42.404 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGQxYTgwNWQ2MjAxOTNiOGYxMTkzODJlNDIwY2EwZmI0NWQ3NWE0ZDhkZGY1ZjU3ODQwMmY4MjIxNWE5YjYwZjfBZwo=: 00:15:42.404 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:15:42.404 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:15:42.404 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:15:42.404 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:15:42.404 19:49:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:15:42.404 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:15:42.404 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:15:42.404 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:42.404 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.404 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:42.404 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.404 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:15:42.404 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:42.404 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:42.404 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:42.404 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:42.404 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:42.404 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:42.404 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:42.404 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:42.404 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:42.404 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:42.404 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:15:42.404 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.404 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:42.971 nvme0n1 00:15:42.971 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.971 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:15:42.971 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.971 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:15:42.971 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:42.971 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.971 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.971 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:42.971 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.971 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:42.971 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.971 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:15:42.971 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:42.971 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:42.971 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:42.971 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:42.971 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQzMWU1NzUxYzU1ZjFkOThiNTI5NTAxZjJlYzgwOWJiNTRjYWQ5NmE1NTJmZWMx4eq7yQ==: 00:15:42.971 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmQ2YzRlY2RiNWU1NTUyNjRlN2EwNjdhMjhkYzA1OGJhYTJlNTVmZGI3MzRmMzczCjrNXg==: 00:15:42.971 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:42.971 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:42.971 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQzMWU1NzUxYzU1ZjFkOThiNTI5NTAxZjJlYzgwOWJiNTRjYWQ5NmE1NTJmZWMx4eq7yQ==: 00:15:42.971 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmQ2YzRlY2RiNWU1NTUyNjRlN2EwNjdhMjhkYzA1OGJhYTJlNTVmZGI3MzRmMzczCjrNXg==: ]] 00:15:42.971 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmQ2YzRlY2RiNWU1NTUyNjRlN2EwNjdhMjhkYzA1OGJhYTJlNTVmZGI3MzRmMzczCjrNXg==: 00:15:42.971 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:42.971 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.971 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:42.971 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.971 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:15:42.971 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:42.971 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:42.971 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:42.971 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:42.971 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:42.971 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:42.971 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:42.971 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:42.971 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:42.971 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:42.971 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:15:42.971 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
local es=0 00:15:42.971 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:15:42.971 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:42.971 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:42.971 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:42.971 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:42.971 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:15:42.971 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.971 19:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:42.971 request: 00:15:42.971 { 00:15:42.971 "name": "nvme0", 00:15:42.971 "trtype": "tcp", 00:15:42.971 "traddr": "10.0.0.1", 00:15:42.972 "adrfam": "ipv4", 00:15:42.972 "trsvcid": "4420", 00:15:42.972 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:15:42.972 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:15:42.972 "prchk_reftag": false, 00:15:42.972 "prchk_guard": false, 00:15:42.972 "hdgst": false, 00:15:42.972 "ddgst": false, 00:15:42.972 "allow_unrecognized_csi": false, 00:15:42.972 "method": "bdev_nvme_attach_controller", 00:15:42.972 "req_id": 1 00:15:42.972 } 00:15:42.972 Got JSON-RPC error response 00:15:42.972 response: 00:15:42.972 { 00:15:42.972 "code": -5, 00:15:42.972 "message": "Input/output error" 00:15:42.972 } 00:15:42.972 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:42.972 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:15:42.972 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:42.972 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:42.972 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:42.972 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:15:42.972 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.972 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:42.972 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:15:42.972 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.972 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:15:42.972 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:15:42.972 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:42.972 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:42.972 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:42.972 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:42.972 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:42.972 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:42.972 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:42.972 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:42.972 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:42.972 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:42.972 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:15:42.972 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:15:42.972 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:15:42.972 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:42.972 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:42.972 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:42.972 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:42.972 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:15:42.972 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.972 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:42.972 request: 00:15:42.972 { 00:15:42.972 "name": "nvme0", 00:15:42.972 "trtype": "tcp", 00:15:42.972 "traddr": "10.0.0.1", 00:15:42.972 "adrfam": "ipv4", 00:15:42.972 "trsvcid": "4420", 00:15:42.972 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:15:42.972 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:15:42.972 "prchk_reftag": false, 00:15:42.972 "prchk_guard": false, 00:15:42.972 "hdgst": false, 00:15:42.972 "ddgst": false, 00:15:42.972 "dhchap_key": "key2", 00:15:42.972 "allow_unrecognized_csi": false, 00:15:42.972 "method": "bdev_nvme_attach_controller", 00:15:42.972 "req_id": 1 00:15:42.972 } 00:15:42.972 Got JSON-RPC error response 00:15:42.972 response: 00:15:42.972 { 00:15:42.972 "code": -5, 00:15:42.972 "message": "Input/output error" 00:15:42.972 } 00:15:42.972 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:42.972 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:15:42.972 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:42.972 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:42.972 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:42.972 19:49:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:15:42.972 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:15:42.972 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.972 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:42.972 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.972 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:15:42.972 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:15:42.972 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:42.972 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:42.972 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:42.972 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:42.972 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:42.972 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:42.972 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:42.972 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:42.972 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:42.972 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:42.972 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:42.972 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:15:42.972 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:42.972 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:42.972 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:42.972 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:42.972 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:42.972 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:42.972 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.972 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:42.972 request: 00:15:42.972 { 00:15:42.972 "name": "nvme0", 00:15:42.972 "trtype": "tcp", 00:15:42.972 "traddr": "10.0.0.1", 00:15:42.972 "adrfam": "ipv4", 00:15:42.972 "trsvcid": "4420", 
00:15:42.972 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:15:42.973 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:15:42.973 "prchk_reftag": false, 00:15:42.973 "prchk_guard": false, 00:15:42.973 "hdgst": false, 00:15:42.973 "ddgst": false, 00:15:42.973 "dhchap_key": "key1", 00:15:42.973 "dhchap_ctrlr_key": "ckey2", 00:15:42.973 "allow_unrecognized_csi": false, 00:15:42.973 "method": "bdev_nvme_attach_controller", 00:15:42.973 "req_id": 1 00:15:42.973 } 00:15:42.973 Got JSON-RPC error response 00:15:42.973 response: 00:15:42.973 { 00:15:42.973 "code": -5, 00:15:42.973 "message": "Input/output error" 00:15:42.973 } 00:15:42.973 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:42.973 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:15:42.973 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:42.973 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:42.973 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:42.973 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:15:42.973 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:42.973 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:42.973 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:42.973 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:42.973 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:42.973 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:42.973 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:42.973 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:42.973 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:42.973 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:42.973 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:42.973 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.973 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:43.232 nvme0n1 00:15:43.232 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.232 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:15:43.232 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:43.232 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:43.232 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:43.232 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:43.232 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:ODhkMGJhZDNjZGI0ZmIxYzg2NDJmNDEwMjUzNjdjNGbNjTNE: 00:15:43.232 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWE0ODUwMDk4NzE5NGRiOWI5NDlkZDBhNmU4OTZkYWJsq0QM: 00:15:43.232 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:43.232 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:43.232 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODhkMGJhZDNjZGI0ZmIxYzg2NDJmNDEwMjUzNjdjNGbNjTNE: 00:15:43.232 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWE0ODUwMDk4NzE5NGRiOWI5NDlkZDBhNmU4OTZkYWJsq0QM: ]] 00:15:43.232 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWE0ODUwMDk4NzE5NGRiOWI5NDlkZDBhNmU4OTZkYWJsq0QM: 00:15:43.232 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:43.232 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.232 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:43.232 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.232 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:15:43.232 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:15:43.232 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.232 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:43.232 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.232 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.232 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:43.232 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:15:43.232 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:43.232 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:43.232 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:43.232 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:43.232 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:43.232 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:43.232 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.232 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:43.232 request: 00:15:43.232 { 00:15:43.232 "name": "nvme0", 00:15:43.232 "dhchap_key": "key1", 00:15:43.232 "dhchap_ctrlr_key": "ckey2", 00:15:43.232 "method": "bdev_nvme_set_keys", 00:15:43.232 "req_id": 1 00:15:43.232 } 00:15:43.232 Got JSON-RPC error response 00:15:43.232 response: 00:15:43.232 
{ 00:15:43.232 "code": -13, 00:15:43.232 "message": "Permission denied" 00:15:43.232 } 00:15:43.232 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:43.232 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:15:43.232 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:43.232 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:43.232 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:43.232 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:15:43.232 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.232 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:15:43.232 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:43.232 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.232 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:15:43.232 19:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:15:44.165 19:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:15:44.165 19:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.165 19:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:44.165 19:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:15:44.165 19:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.165 19:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:15:44.165 19:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:15:44.165 19:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:44.165 19:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:44.165 19:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:44.165 19:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:15:44.165 19:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQzMWU1NzUxYzU1ZjFkOThiNTI5NTAxZjJlYzgwOWJiNTRjYWQ5NmE1NTJmZWMx4eq7yQ==: 00:15:44.165 19:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmQ2YzRlY2RiNWU1NTUyNjRlN2EwNjdhMjhkYzA1OGJhYTJlNTVmZGI3MzRmMzczCjrNXg==: 00:15:44.165 19:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:44.165 19:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:44.165 19:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQzMWU1NzUxYzU1ZjFkOThiNTI5NTAxZjJlYzgwOWJiNTRjYWQ5NmE1NTJmZWMx4eq7yQ==: 00:15:44.165 19:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmQ2YzRlY2RiNWU1NTUyNjRlN2EwNjdhMjhkYzA1OGJhYTJlNTVmZGI3MzRmMzczCjrNXg==: ]] 00:15:44.165 19:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmQ2YzRlY2RiNWU1NTUyNjRlN2EwNjdhMjhkYzA1OGJhYTJlNTVmZGI3MzRmMzczCjrNXg==: 00:15:44.165 19:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@142 -- # get_main_ns_ip 00:15:44.165 19:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:15:44.165 19:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:44.165 19:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:44.165 19:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:44.165 19:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:44.165 19:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:44.165 19:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:44.165 19:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:44.165 19:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:44.165 19:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:44.165 19:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:44.165 19:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.165 19:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:44.424 nvme0n1 00:15:44.424 19:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.424 19:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:15:44.424 19:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:15:44.424 19:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:15:44.424 19:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:15:44.424 19:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:15:44.424 19:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODhkMGJhZDNjZGI0ZmIxYzg2NDJmNDEwMjUzNjdjNGbNjTNE: 00:15:44.424 19:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWE0ODUwMDk4NzE5NGRiOWI5NDlkZDBhNmU4OTZkYWJsq0QM: 00:15:44.424 19:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:15:44.424 19:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:15:44.424 19:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODhkMGJhZDNjZGI0ZmIxYzg2NDJmNDEwMjUzNjdjNGbNjTNE: 00:15:44.424 19:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWE0ODUwMDk4NzE5NGRiOWI5NDlkZDBhNmU4OTZkYWJsq0QM: ]] 00:15:44.424 19:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWE0ODUwMDk4NzE5NGRiOWI5NDlkZDBhNmU4OTZkYWJsq0QM: 00:15:44.424 19:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:15:44.424 19:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:15:44.424 19:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:15:44.424 19:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:44.424 19:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:44.424 19:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:44.424 19:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:44.424 19:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:15:44.424 19:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.424 19:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:44.424 request: 00:15:44.424 { 00:15:44.424 "name": "nvme0", 00:15:44.424 "dhchap_key": "key2", 00:15:44.424 "dhchap_ctrlr_key": "ckey1", 00:15:44.424 "method": "bdev_nvme_set_keys", 00:15:44.424 "req_id": 1 00:15:44.424 } 00:15:44.424 Got JSON-RPC error response 00:15:44.424 response: 00:15:44.424 { 00:15:44.424 "code": -13, 00:15:44.424 "message": "Permission denied" 00:15:44.424 } 00:15:44.424 19:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:44.424 19:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:15:44.424 19:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:44.424 19:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:44.424 19:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:44.424 19:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:15:44.424 19:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.424 19:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:44.424 19:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:15:44.424 19:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.424 19:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:15:44.424 19:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:15:45.376 19:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:15:45.376 19:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:15:45.376 19:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.376 19:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:45.376 19:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.376 19:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:15:45.376 19:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:15:45.376 19:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:15:45.376 19:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:15:45.376 19:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:15:45.376 19:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:15:45.376 19:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:45.376 19:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:15:45.376 19:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:45.376 19:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:45.376 rmmod nvme_tcp 00:15:45.376 rmmod nvme_fabrics 00:15:45.376 19:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:45.376 19:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:15:45.376 19:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:15:45.376 19:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 76838 ']' 00:15:45.376 19:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 76838 00:15:45.376 19:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 76838 ']' 00:15:45.376 19:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 76838 00:15:45.376 19:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:15:45.376 19:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:45.376 19:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76838 00:15:45.376 killing process with pid 76838 00:15:45.376 19:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:45.376 19:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:45.376 19:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76838' 00:15:45.376 19:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 76838 00:15:45.376 19:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 76838 00:15:45.643 19:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:45.643 19:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:45.643 19:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:45.643 19:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:15:45.643 19:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:15:45.643 19:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:45.643 19:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:15:45.643 19:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:45.643 19:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:45.643 19:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:45.643 19:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:45.643 19:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:45.643 19:49:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:45.643 19:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:45.643 19:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:45.643 19:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:45.643 19:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:45.643 19:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:45.643 19:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:45.643 19:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:45.643 19:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:45.643 19:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:45.901 19:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:45.901 19:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:45.901 19:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:45.901 19:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:45.901 19:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:15:45.901 19:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:15:45.901 19:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:15:45.901 19:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:15:45.901 19:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:15:45.901 19:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:15:45.901 19:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:15:45.901 19:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:15:45.901 19:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:15:45.901 19:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:15:45.901 19:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:15:45.901 19:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:15:45.901 19:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:46.467 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:46.467 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
00:15:46.467 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:15:46.467 19:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.MA9 /tmp/spdk.key-null.JnG /tmp/spdk.key-sha256.sLf /tmp/spdk.key-sha384.won /tmp/spdk.key-sha512.sI6 /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:15:46.467 19:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:46.725 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:46.725 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:46.725 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:46.725 00:15:46.725 real 0m38.911s 00:15:46.725 user 0m30.871s 00:15:46.725 sys 0m3.180s 00:15:46.725 19:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:46.725 19:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:15:46.725 ************************************ 00:15:46.725 END TEST nvmf_auth_host 00:15:46.725 ************************************ 00:15:46.984 19:49:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:15:46.984 19:49:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:15:46.984 19:49:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:46.984 19:49:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:46.984 19:49:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:46.984 ************************************ 00:15:46.984 START TEST nvmf_digest 00:15:46.984 ************************************ 00:15:46.984 19:49:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:15:46.984 * Looking for test storage... 
00:15:46.984 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:46.984 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:46.984 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:15:46.984 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:46.984 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:46.984 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:46.984 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:46.984 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:46.984 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:15:46.984 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:15:46.984 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:15:46.984 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:15:46.984 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:15:46.984 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:15:46.984 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:15:46.984 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:46.984 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:15:46.984 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:15:46.984 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:46.984 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:46.984 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:15:46.984 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:15:46.984 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:46.984 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:15:46.984 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:15:46.984 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:15:46.984 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:15:46.984 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:46.984 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:15:46.984 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:15:46.984 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:46.984 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:46.984 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:15:46.984 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:46.984 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:46.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.984 --rc genhtml_branch_coverage=1 00:15:46.984 --rc genhtml_function_coverage=1 00:15:46.984 --rc genhtml_legend=1 00:15:46.984 --rc geninfo_all_blocks=1 00:15:46.984 --rc geninfo_unexecuted_blocks=1 00:15:46.984 00:15:46.984 ' 00:15:46.984 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:46.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.984 --rc genhtml_branch_coverage=1 00:15:46.984 --rc genhtml_function_coverage=1 00:15:46.984 --rc genhtml_legend=1 00:15:46.984 --rc geninfo_all_blocks=1 00:15:46.984 --rc geninfo_unexecuted_blocks=1 00:15:46.984 00:15:46.984 ' 00:15:46.984 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:46.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.984 --rc genhtml_branch_coverage=1 00:15:46.984 --rc genhtml_function_coverage=1 00:15:46.984 --rc genhtml_legend=1 00:15:46.984 --rc geninfo_all_blocks=1 00:15:46.984 --rc geninfo_unexecuted_blocks=1 00:15:46.984 00:15:46.984 ' 00:15:46.984 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:46.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.984 --rc genhtml_branch_coverage=1 00:15:46.984 --rc genhtml_function_coverage=1 00:15:46.984 --rc genhtml_legend=1 00:15:46.984 --rc geninfo_all_blocks=1 00:15:46.984 --rc geninfo_unexecuted_blocks=1 00:15:46.984 00:15:46.984 ' 00:15:46.984 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:46.984 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:15:46.984 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:46.984 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:46.984 19:49:42 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:46.984 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:46.984 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:46.984 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:46.984 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:46.984 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:46.984 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:46.984 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:46.984 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:15:46.984 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=91838eb1-5852-43eb-90b2-09876f360ab2 00:15:46.984 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:46.984 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:46.984 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:46.984 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:46.984 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:46.984 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:15:46.984 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:46.984 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:46.984 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:46.984 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.984 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.984 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.984 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:15:46.984 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.984 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:15:46.984 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:46.984 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:46.985 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:46.985 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:46.985 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:46.985 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:46.985 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:46.985 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:46.985 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:46.985 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:46.985 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:46.985 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:15:46.985 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:15:46.985 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:15:46.985 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:15:46.985 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:46.985 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:46.985 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:46.985 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:46.985 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:46.985 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:46.985 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:46.985 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:46.985 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:46.985 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:46.985 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:46.985 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:46.985 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:46.985 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:46.985 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:46.985 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:46.985 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:46.985 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:46.985 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:46.985 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:46.985 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:46.985 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:46.985 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:46.985 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:46.985 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:46.985 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:46.985 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:46.985 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:46.985 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:46.985 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:46.985 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:46.985 Cannot find device "nvmf_init_br" 00:15:46.985 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:15:46.985 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:46.985 Cannot find device "nvmf_init_br2" 00:15:46.985 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:15:46.985 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:46.985 Cannot find device "nvmf_tgt_br" 00:15:46.985 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:15:46.985 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:15:46.985 Cannot find device "nvmf_tgt_br2" 00:15:46.985 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:15:46.985 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:46.985 Cannot find device "nvmf_init_br" 00:15:46.985 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:15:46.985 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:46.985 Cannot find device "nvmf_init_br2" 00:15:46.985 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:15:46.985 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:46.985 Cannot find device "nvmf_tgt_br" 00:15:46.985 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:15:46.985 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:46.985 Cannot find device "nvmf_tgt_br2" 00:15:46.985 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:15:46.985 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:46.985 Cannot find device "nvmf_br" 00:15:46.985 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:15:46.985 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:46.985 Cannot find device "nvmf_init_if" 00:15:46.985 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:15:46.985 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:46.985 Cannot find device "nvmf_init_if2" 00:15:46.985 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:15:47.243 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:47.243 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:47.243 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:15:47.243 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:47.243 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:47.243 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:15:47.243 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:47.243 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:47.243 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:47.243 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:47.243 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:47.243 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:47.243 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:47.243 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:47.243 19:49:42 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:47.243 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:47.243 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:47.243 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:47.243 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:47.243 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:47.243 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:47.243 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:47.243 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:47.243 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:47.243 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:47.243 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:47.243 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:47.243 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:47.243 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:47.243 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:47.243 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:47.243 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:47.243 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:47.243 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:47.243 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:47.243 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:47.243 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:47.243 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:47.243 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:47.243 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:47.243 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:15:47.243 00:15:47.243 --- 10.0.0.3 ping statistics --- 00:15:47.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:47.243 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:15:47.243 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:47.243 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:47.243 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.030 ms 00:15:47.243 00:15:47.243 --- 10.0.0.4 ping statistics --- 00:15:47.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:47.243 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:15:47.243 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:47.243 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:47.243 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:15:47.243 00:15:47.243 --- 10.0.0.1 ping statistics --- 00:15:47.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:47.243 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:15:47.243 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:47.243 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:47.243 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:15:47.243 00:15:47.243 --- 10.0.0.2 ping statistics --- 00:15:47.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:47.243 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:15:47.243 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:47.243 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 00:15:47.243 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:47.243 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:47.243 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:47.243 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:47.243 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:47.244 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:47.244 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:47.244 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:47.244 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:15:47.244 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:15:47.244 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:47.244 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:47.244 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:15:47.244 ************************************ 00:15:47.244 START TEST nvmf_digest_clean 00:15:47.244 ************************************ 00:15:47.244 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:15:47.244 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
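For reference, the nvmftestinit/nvmf_veth_init sequence traced above builds the loopback topology the digest tests run over: the initiator-side veth interfaces (10.0.0.1, 10.0.0.2) stay in the default namespace, the target-side interfaces (10.0.0.3, 10.0.0.4) are moved into the nvmf_tgt_ns_spdk namespace, the peer ends are enslaved to the nvmf_br bridge, TCP port 4420 is opened in iptables, and reachability is confirmed by the ping statistics above. A condensed recap of the same steps, with interface names and addresses copied from the trace (only one initiator/target pair shown; the set -e and the for loop are editorial shorthand, not lines from common.sh):

#!/usr/bin/env bash
# Condensed sketch of the veth/bridge topology built by nvmf_veth_init above.
set -e
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3   # initiator -> target, as verified in the statistics above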
00:15:47.244 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:15:47.244 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:15:47.244 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:15:47.244 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:15:47.244 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:47.244 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:47.244 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:15:47.244 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=78509 00:15:47.244 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 78509 00:15:47.244 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 78509 ']' 00:15:47.244 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:47.244 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:15:47.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:47.244 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:47.244 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:47.244 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:47.244 19:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:15:47.244 [2024-11-26 19:49:42.470709] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:15:47.244 [2024-11-26 19:49:42.470792] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:47.502 [2024-11-26 19:49:42.610913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:47.502 [2024-11-26 19:49:42.644031] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:47.502 [2024-11-26 19:49:42.644071] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:47.502 [2024-11-26 19:49:42.644078] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:47.502 [2024-11-26 19:49:42.644083] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:47.502 [2024-11-26 19:49:42.644087] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
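The target side is the nvmf_tgt application started by nvmfappstart above; because the listener addresses live inside the namespace, the whole process runs under ip netns exec. A minimal recap of that step (the command is copied from the nvmf/common.sh@508 trace; the backgrounding and pid capture are implied by nvmfpid=78509 and waitforlisten rather than shown verbatim):

# Run the NVMe-oF target inside the test namespace and wait for its RPC socket (/var/tmp/spdk.sock).
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
nvmfpid=$!
# digest.sh then configures the target over /var/tmp/spdk.sock (the rpc_cmd call traced just below at
# host/digest.sh@43); per the notices that follow, this ends with a "null0" bdev exposed through
# nqn.2016-06.io.spdk:cnode1 and a TCP listener on 10.0.0.3 port 4420.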
00:15:47.502 [2024-11-26 19:49:42.644337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:48.434 19:49:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:48.434 19:49:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:15:48.434 19:49:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:48.434 19:49:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:48.434 19:49:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:15:48.434 19:49:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:48.434 19:49:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:15:48.434 19:49:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:15:48.434 19:49:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:15:48.434 19:49:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.434 19:49:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:15:48.434 [2024-11-26 19:49:43.414024] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:48.435 null0 00:15:48.435 [2024-11-26 19:49:43.454420] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:48.435 [2024-11-26 19:49:43.478502] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:48.435 19:49:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.435 19:49:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:15:48.435 19:49:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:15:48.435 19:49:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:15:48.435 19:49:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:15:48.435 19:49:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:15:48.435 19:49:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:15:48.435 19:49:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:15:48.435 19:49:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=78541 00:15:48.435 19:49:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 78541 /var/tmp/bperf.sock 00:15:48.435 19:49:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 78541 ']' 00:15:48.435 19:49:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:15:48.435 19:49:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:15:48.435 19:49:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:48.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:15:48.435 19:49:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:15:48.435 19:49:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:48.435 19:49:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:15:48.435 [2024-11-26 19:49:43.522955] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:15:48.435 [2024-11-26 19:49:43.523025] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78541 ] 00:15:48.435 [2024-11-26 19:49:43.660013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:48.692 [2024-11-26 19:49:43.695532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:49.259 19:49:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:49.259 19:49:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:15:49.259 19:49:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:15:49.259 19:49:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:15:49.259 19:49:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:15:49.516 [2024-11-26 19:49:44.609367] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:49.516 19:49:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:15:49.516 19:49:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:15:49.775 nvme0n1 00:15:49.775 19:49:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:15:49.775 19:49:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:15:50.032 Running I/O for 2 seconds... 
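Each run_bperf pass follows the same initiator-side pattern traced above: start bdevperf in wait-for-RPC mode, finish its framework init over /var/tmp/bperf.sock, attach the remote controller with the TCP data digest enabled, then drive the run from bdevperf.py. A recap for this first pass (randread, 4 KiB, queue depth 128; commands copied from the trace, with backgrounding of bdevperf added for readability):

# Initiator side of the first digest pass, as traced above.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
# Complete bdevperf startup, then attach the target with the data digest (--ddgst) turned on.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
    --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Kick off the 2-second workload whose results are reported next.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests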
00:15:51.903 15367.00 IOPS, 60.03 MiB/s [2024-11-26T19:49:47.150Z] 17081.50 IOPS, 66.72 MiB/s 00:15:51.903 Latency(us) 00:15:51.903 [2024-11-26T19:49:47.150Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:51.903 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:15:51.903 nvme0n1 : 2.01 17135.35 66.93 0.00 0.00 7466.22 6099.89 23996.26 00:15:51.903 [2024-11-26T19:49:47.150Z] =================================================================================================================== 00:15:51.903 [2024-11-26T19:49:47.150Z] Total : 17135.35 66.93 0.00 0.00 7466.22 6099.89 23996.26 00:15:51.903 { 00:15:51.903 "results": [ 00:15:51.903 { 00:15:51.903 "job": "nvme0n1", 00:15:51.903 "core_mask": "0x2", 00:15:51.903 "workload": "randread", 00:15:51.903 "status": "finished", 00:15:51.903 "queue_depth": 128, 00:15:51.903 "io_size": 4096, 00:15:51.903 "runtime": 2.008596, 00:15:51.903 "iops": 17135.352256003694, 00:15:51.903 "mibps": 66.93496975001443, 00:15:51.903 "io_failed": 0, 00:15:51.903 "io_timeout": 0, 00:15:51.903 "avg_latency_us": 7466.223447659319, 00:15:51.903 "min_latency_us": 6099.88923076923, 00:15:51.903 "max_latency_us": 23996.258461538462 00:15:51.903 } 00:15:51.903 ], 00:15:51.903 "core_count": 1 00:15:51.903 } 00:15:51.903 19:49:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:15:51.903 19:49:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:15:51.903 19:49:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:15:51.903 19:49:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:15:51.903 19:49:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:15:51.903 | select(.opcode=="crc32c") 00:15:51.903 | "\(.module_name) \(.executed)"' 00:15:52.161 19:49:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:15:52.161 19:49:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:15:52.161 19:49:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:15:52.161 19:49:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:15:52.161 19:49:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 78541 00:15:52.161 19:49:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 78541 ']' 00:15:52.161 19:49:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 78541 00:15:52.161 19:49:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:15:52.161 19:49:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:52.161 19:49:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78541 00:15:52.161 19:49:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:52.161 19:49:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
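The get_accel_stats step traced just above is how the test confirms digests were actually computed on the initiator: it asks bdevperf's accel framework which module executed crc32c and compares it with the expected one (software here, since this pass runs with scan_dsa=false). The check, with the command and jq filter copied from the trace:

# Report which accel module performed crc32c and how many operations it executed; the test then
# asserts acc_executed > 0 and that the module matches the expected "software" backend.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
  | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'

As a sanity check on the table above, the throughput is also internally consistent: at queue depth 128, an average latency of 7466.22 us sustains roughly 128 / 7466.22e-6 ≈ 17.1K IOPS, in line with the reported 17135.35 IOPS.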
00:15:52.161 killing process with pid 78541 00:15:52.161 Received shutdown signal, test time was about 2.000000 seconds 00:15:52.161 00:15:52.161 Latency(us) 00:15:52.161 [2024-11-26T19:49:47.408Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:52.161 [2024-11-26T19:49:47.408Z] =================================================================================================================== 00:15:52.161 [2024-11-26T19:49:47.408Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:52.161 19:49:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78541' 00:15:52.161 19:49:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 78541 00:15:52.161 19:49:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 78541 00:15:52.161 19:49:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:15:52.161 19:49:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:15:52.161 19:49:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:15:52.161 19:49:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:15:52.161 19:49:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:15:52.161 19:49:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:15:52.161 19:49:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:15:52.161 19:49:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:15:52.161 19:49:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=78596 00:15:52.161 19:49:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 78596 /var/tmp/bperf.sock 00:15:52.161 19:49:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 78596 ']' 00:15:52.161 19:49:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:15:52.161 19:49:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:52.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:15:52.161 19:49:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:15:52.161 19:49:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:52.161 19:49:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:15:52.418 I/O size of 131072 is greater than zero copy threshold (65536). 00:15:52.418 Zero copy mechanism will not be used. 00:15:52.418 [2024-11-26 19:49:47.423495] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:15:52.418 [2024-11-26 19:49:47.423549] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78596 ] 00:15:52.418 [2024-11-26 19:49:47.556372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:52.418 [2024-11-26 19:49:47.586890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:53.353 19:49:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:53.353 19:49:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:15:53.353 19:49:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:15:53.353 19:49:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:15:53.353 19:49:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:15:53.353 [2024-11-26 19:49:48.506891] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:53.353 19:49:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:15:53.353 19:49:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:15:53.611 nvme0n1 00:15:53.611 19:49:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:15:53.611 19:49:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:15:53.868 I/O size of 131072 is greater than zero copy threshold (65536). 00:15:53.868 Zero copy mechanism will not be used. 00:15:53.869 Running I/O for 2 seconds... 
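The second pass repeats the same flow with large blocks and a shallow queue: only the bdevperf workload flags change, and bdevperf notes above that 128 KiB I/O exceeds its 64 KiB zero-copy threshold, so buffers are copied. For reference, the changed flags plus two arithmetic cross-checks against the numbers reported below (the awk lines are editorial, not part of the test):

# Workload for this pass (from the trace above): 128 KiB random reads at queue depth 16.
#   bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc
# MiB/s is IOPS * io_size, and IOPS is roughly queue_depth / average latency:
awk 'BEGIN { printf "%.2f MiB/s\n", 11486.035863888077 * 131072 / 1048576 }'  # -> 1435.75, as reported below
awk 'BEGIN { printf "%.0f IOPS\n", 16 / 1390.49e-6 }'                         # -> ~11507, vs 11486 reported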
00:15:55.735 11440.00 IOPS, 1430.00 MiB/s [2024-11-26T19:49:50.982Z] 11488.00 IOPS, 1436.00 MiB/s 00:15:55.735 Latency(us) 00:15:55.735 [2024-11-26T19:49:50.982Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:55.735 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:15:55.735 nvme0n1 : 2.00 11486.04 1435.75 0.00 0.00 1390.49 1310.72 7511.43 00:15:55.735 [2024-11-26T19:49:50.982Z] =================================================================================================================== 00:15:55.735 [2024-11-26T19:49:50.982Z] Total : 11486.04 1435.75 0.00 0.00 1390.49 1310.72 7511.43 00:15:55.735 { 00:15:55.735 "results": [ 00:15:55.735 { 00:15:55.735 "job": "nvme0n1", 00:15:55.735 "core_mask": "0x2", 00:15:55.735 "workload": "randread", 00:15:55.735 "status": "finished", 00:15:55.735 "queue_depth": 16, 00:15:55.735 "io_size": 131072, 00:15:55.735 "runtime": 2.001735, 00:15:55.735 "iops": 11486.035863888077, 00:15:55.735 "mibps": 1435.7544829860096, 00:15:55.735 "io_failed": 0, 00:15:55.735 "io_timeout": 0, 00:15:55.735 "avg_latency_us": 1390.4887832557145, 00:15:55.735 "min_latency_us": 1310.72, 00:15:55.735 "max_latency_us": 7511.433846153846 00:15:55.735 } 00:15:55.735 ], 00:15:55.735 "core_count": 1 00:15:55.735 } 00:15:55.735 19:49:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:15:55.735 19:49:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:15:55.735 19:49:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:15:55.735 | select(.opcode=="crc32c") 00:15:55.735 | "\(.module_name) \(.executed)"' 00:15:55.735 19:49:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:15:55.735 19:49:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:15:55.994 19:49:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:15:55.994 19:49:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:15:55.994 19:49:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:15:55.994 19:49:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:15:55.994 19:49:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 78596 00:15:55.994 19:49:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 78596 ']' 00:15:55.994 19:49:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 78596 00:15:55.994 19:49:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:15:55.994 19:49:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:55.994 19:49:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78596 00:15:55.994 19:49:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:55.994 killing process with pid 78596 00:15:55.994 Received shutdown signal, test time was about 2.000000 seconds 00:15:55.994 
00:15:55.994 Latency(us) 00:15:55.994 [2024-11-26T19:49:51.241Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:55.994 [2024-11-26T19:49:51.241Z] =================================================================================================================== 00:15:55.994 [2024-11-26T19:49:51.241Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:55.994 19:49:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:55.994 19:49:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78596' 00:15:55.994 19:49:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 78596 00:15:55.994 19:49:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 78596 00:15:56.252 19:49:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:15:56.252 19:49:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:15:56.252 19:49:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:15:56.252 19:49:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:15:56.252 19:49:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:15:56.252 19:49:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:15:56.252 19:49:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:15:56.252 19:49:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=78655 00:15:56.252 19:49:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 78655 /var/tmp/bperf.sock 00:15:56.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:15:56.252 19:49:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 78655 ']' 00:15:56.252 19:49:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:15:56.252 19:49:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:56.252 19:49:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:15:56.252 19:49:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:56.252 19:49:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:15:56.252 19:49:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:15:56.252 [2024-11-26 19:49:51.285234] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:15:56.252 [2024-11-26 19:49:51.285296] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78655 ] 00:15:56.252 [2024-11-26 19:49:51.427584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:56.252 [2024-11-26 19:49:51.463024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:57.186 19:49:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:57.186 19:49:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:15:57.186 19:49:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:15:57.186 19:49:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:15:57.186 19:49:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:15:57.186 [2024-11-26 19:49:52.381933] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:57.186 19:49:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:15:57.186 19:49:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:15:57.444 nvme0n1 00:15:57.444 19:49:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:15:57.444 19:49:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:15:57.703 Running I/O for 2 seconds... 
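The two remaining passes mirror the read passes in the write direction; only the bdevperf workload changes (-w randwrite, first at 4 KiB/QD 128, then at 128 KiB/QD 16). For host writes the crc32c data digest is generated by the initiator for each outgoing data PDU rather than verified on receipt, so the same accel_get_stats crc32c check applies unchanged. One arithmetic cross-check against the table that follows (editorial, not part of the test):

# MiB/s for the randwrite 4 KiB pass reported below is again just IOPS * io_size:
awk 'BEGIN { printf "%.2f MiB/s\n", 20057.028254092555 * 4096 / 1048576 }'   # -> 78.35 MiB/s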
00:15:59.571 19561.00 IOPS, 76.41 MiB/s [2024-11-26T19:49:54.818Z] 20067.50 IOPS, 78.39 MiB/s 00:15:59.571 Latency(us) 00:15:59.571 [2024-11-26T19:49:54.818Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:59.571 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:59.572 nvme0n1 : 2.01 20057.03 78.35 0.00 0.00 6376.77 2167.73 14619.57 00:15:59.572 [2024-11-26T19:49:54.819Z] =================================================================================================================== 00:15:59.572 [2024-11-26T19:49:54.819Z] Total : 20057.03 78.35 0.00 0.00 6376.77 2167.73 14619.57 00:15:59.572 { 00:15:59.572 "results": [ 00:15:59.572 { 00:15:59.572 "job": "nvme0n1", 00:15:59.572 "core_mask": "0x2", 00:15:59.572 "workload": "randwrite", 00:15:59.572 "status": "finished", 00:15:59.572 "queue_depth": 128, 00:15:59.572 "io_size": 4096, 00:15:59.572 "runtime": 2.007426, 00:15:59.572 "iops": 20057.028254092555, 00:15:59.572 "mibps": 78.34776661754904, 00:15:59.572 "io_failed": 0, 00:15:59.572 "io_timeout": 0, 00:15:59.572 "avg_latency_us": 6376.7664320936, 00:15:59.572 "min_latency_us": 2167.729230769231, 00:15:59.572 "max_latency_us": 14619.569230769232 00:15:59.572 } 00:15:59.572 ], 00:15:59.572 "core_count": 1 00:15:59.572 } 00:15:59.572 19:49:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:15:59.572 19:49:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:15:59.572 19:49:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:15:59.572 | select(.opcode=="crc32c") 00:15:59.572 | "\(.module_name) \(.executed)"' 00:15:59.572 19:49:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:15:59.572 19:49:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:15:59.830 19:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:15:59.830 19:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:15:59.830 19:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:15:59.830 19:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:15:59.830 19:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 78655 00:15:59.830 19:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 78655 ']' 00:15:59.830 19:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 78655 00:15:59.830 19:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:15:59.830 19:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:59.830 19:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78655 00:15:59.830 killing process with pid 78655 00:15:59.830 Received shutdown signal, test time was about 2.000000 seconds 00:15:59.830 00:15:59.830 Latency(us) 00:15:59.830 [2024-11-26T19:49:55.077Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:15:59.830 [2024-11-26T19:49:55.077Z] =================================================================================================================== 00:15:59.830 [2024-11-26T19:49:55.077Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:59.830 19:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:59.830 19:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:59.830 19:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78655' 00:15:59.830 19:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 78655 00:15:59.830 19:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 78655 00:16:00.089 19:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:16:00.089 19:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:16:00.089 19:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:00.089 19:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:16:00.089 19:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:16:00.089 19:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:16:00.089 19:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:16:00.089 19:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=78706 00:16:00.089 19:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 78706 /var/tmp/bperf.sock 00:16:00.089 19:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 78706 ']' 00:16:00.089 19:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:16:00.089 19:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:00.089 19:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:00.089 19:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:00.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:00.089 19:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:00.089 19:49:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:00.089 [2024-11-26 19:49:55.172363] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:16:00.089 [2024-11-26 19:49:55.172550] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6I/O size of 131072 is greater than zero copy threshold (65536). 00:16:00.089 Zero copy mechanism will not be used. 
00:16:00.089 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78706 ] 00:16:00.089 [2024-11-26 19:49:55.299328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:00.089 [2024-11-26 19:49:55.331817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:01.024 19:49:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:01.024 19:49:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:16:01.024 19:49:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:16:01.024 19:49:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:16:01.024 19:49:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:01.024 [2024-11-26 19:49:56.229569] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:01.024 19:49:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:01.024 19:49:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:01.294 nvme0n1 00:16:01.294 19:49:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:16:01.294 19:49:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:01.552 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:01.552 Zero copy mechanism will not be used. 00:16:01.552 Running I/O for 2 seconds... 
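Editor's note: for readers reconstructing the step above, the digest-clean pass reduces to the shell sequence below, assembled only from commands visible in this log (the waitforlisten/killprocess helpers and error handling from host/digest.sh are omitted, so treat it as a minimal sketch rather than the script itself).

    # Minimal sketch of the digest-clean flow (paths, socket and NQN reproduced from this log).
    SPDK=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/bperf.sock

    # 1. Start bdevperf paused (-z --wait-for-rpc) so it can be configured before any I/O.
    "$SPDK/build/examples/bdevperf" -m 2 -r "$SOCK" -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc &

    # 2. Finish subsystem init, then attach the TCP controller with data digest enabled (--ddgst).
    "$SPDK/scripts/rpc.py" -s "$SOCK" framework_start_init
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # 3. Run the 2-second workload, then check which accel module executed the crc32c operations.
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests
    "$SPDK/scripts/rpc.py" -s "$SOCK" accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'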
00:16:03.418 10793.00 IOPS, 1349.12 MiB/s [2024-11-26T19:49:58.665Z] 10828.50 IOPS, 1353.56 MiB/s 00:16:03.418 Latency(us) 00:16:03.418 [2024-11-26T19:49:58.665Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:03.418 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:16:03.418 nvme0n1 : 2.00 10824.94 1353.12 0.00 0.00 1475.02 1064.96 6427.57 00:16:03.418 [2024-11-26T19:49:58.665Z] =================================================================================================================== 00:16:03.418 [2024-11-26T19:49:58.665Z] Total : 10824.94 1353.12 0.00 0.00 1475.02 1064.96 6427.57 00:16:03.418 { 00:16:03.418 "results": [ 00:16:03.418 { 00:16:03.418 "job": "nvme0n1", 00:16:03.418 "core_mask": "0x2", 00:16:03.418 "workload": "randwrite", 00:16:03.418 "status": "finished", 00:16:03.418 "queue_depth": 16, 00:16:03.418 "io_size": 131072, 00:16:03.418 "runtime": 2.002135, 00:16:03.418 "iops": 10824.944371883015, 00:16:03.418 "mibps": 1353.118046485377, 00:16:03.418 "io_failed": 0, 00:16:03.418 "io_timeout": 0, 00:16:03.418 "avg_latency_us": 1475.0233184856024, 00:16:03.418 "min_latency_us": 1064.96, 00:16:03.418 "max_latency_us": 6427.569230769231 00:16:03.418 } 00:16:03.418 ], 00:16:03.418 "core_count": 1 00:16:03.418 } 00:16:03.418 19:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:16:03.418 19:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:16:03.418 19:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:03.418 19:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:03.418 | select(.opcode=="crc32c") 00:16:03.418 | "\(.module_name) \(.executed)"' 00:16:03.418 19:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:03.677 19:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:16:03.677 19:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:16:03.677 19:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:16:03.677 19:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:03.677 19:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 78706 00:16:03.677 19:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 78706 ']' 00:16:03.677 19:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 78706 00:16:03.677 19:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:16:03.677 19:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:03.678 19:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78706 00:16:03.678 killing process with pid 78706 00:16:03.678 Received shutdown signal, test time was about 2.000000 seconds 00:16:03.678 00:16:03.678 Latency(us) 00:16:03.678 [2024-11-26T19:49:58.925Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:16:03.678 [2024-11-26T19:49:58.925Z] =================================================================================================================== 00:16:03.678 [2024-11-26T19:49:58.925Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:03.678 19:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:03.678 19:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:03.678 19:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78706' 00:16:03.678 19:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 78706 00:16:03.678 19:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 78706 00:16:03.936 19:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 78509 00:16:03.936 19:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 78509 ']' 00:16:03.936 19:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 78509 00:16:03.936 19:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:16:03.936 19:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:03.936 19:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78509 00:16:03.936 killing process with pid 78509 00:16:03.936 19:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:03.936 19:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:03.936 19:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78509' 00:16:03.936 19:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 78509 00:16:03.936 19:49:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 78509 00:16:03.936 00:16:03.936 real 0m16.635s 00:16:03.936 user 0m32.399s 00:16:03.936 sys 0m3.576s 00:16:03.936 19:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:03.936 ************************************ 00:16:03.936 END TEST nvmf_digest_clean 00:16:03.936 ************************************ 00:16:03.936 19:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:03.936 19:49:59 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:16:03.936 19:49:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:03.936 19:49:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:03.936 19:49:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:16:03.936 ************************************ 00:16:03.936 START TEST nvmf_digest_error 00:16:03.936 ************************************ 00:16:03.936 19:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:16:03.936 19:49:59 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:16:03.936 19:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:03.936 19:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:03.936 19:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:03.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:03.936 19:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=78790 00:16:03.936 19:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 78790 00:16:03.936 19:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 78790 ']' 00:16:03.936 19:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:03.936 19:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:03.936 19:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:03.936 19:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:03.936 19:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:03.936 19:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:16:03.936 [2024-11-26 19:49:59.147547] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:16:03.936 [2024-11-26 19:49:59.147604] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:04.193 [2024-11-26 19:49:59.275648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:04.193 [2024-11-26 19:49:59.304433] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:04.193 [2024-11-26 19:49:59.304472] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:04.193 [2024-11-26 19:49:59.304478] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:04.193 [2024-11-26 19:49:59.304482] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:04.193 [2024-11-26 19:49:59.304486] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
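Editor's note: the target application for the error test is launched exactly as quoted in the nvmf/common.sh line above, and the trace notices spell out how a tracepoint snapshot could be taken; a minimal sketch using only the command lines shown in this log:

    # Start the NVMe-oF target inside its network namespace, paused (--wait-for-rpc)
    # and with all tracepoint groups enabled (-e 0xFFFF), as invoked above.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &

    # Per the notices above, a runtime snapshot of the nvmf tracepoints can later be taken
    # with spdk_trace, or /dev/shm/nvmf_trace.0 can be copied for offline analysis.
    spdk_trace -s nvmf -i 0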
00:16:04.193 [2024-11-26 19:49:59.304691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:04.757 19:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:04.757 19:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:16:04.757 19:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:04.757 19:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:04.757 19:49:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:05.014 19:50:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:05.014 19:50:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:16:05.014 19:50:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.014 19:50:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:05.014 [2024-11-26 19:50:00.008996] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:16:05.015 19:50:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.015 19:50:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:16:05.015 19:50:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:16:05.015 19:50:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.015 19:50:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:05.015 [2024-11-26 19:50:00.044676] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:05.015 null0 00:16:05.015 [2024-11-26 19:50:00.083098] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:05.015 [2024-11-26 19:50:00.107166] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:05.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
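Editor's note: the target-side setup that produced the notices above (crc32c routed to the error module, null0 bdev, TCP listener on 10.0.0.3:4420) can be sketched as the RPC sequence below. Only accel_assign_opc and the listener address/port are taken verbatim from this log; the null bdev size/block size and the exact nvmf_* arguments are assumptions standing in for whatever common_target_config actually issues.

    # Sketch of the digest-error target configuration (target's default RPC socket).
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Route crc32c operations to the "error" accel module before framework init,
    # so digest corruption can be injected later.
    $RPC accel_assign_opc -o crc32c -m error
    $RPC framework_start_init

    # Back a namespace with a null bdev and expose it over NVMe/TCP (sizes assumed).
    $RPC bdev_null_create null0 1000 512
    $RPC nvmf_create_transport -t tcp
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420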
00:16:05.015 19:50:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.015 19:50:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:16:05.015 19:50:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:16:05.015 19:50:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:16:05.015 19:50:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:16:05.015 19:50:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:16:05.015 19:50:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=78822 00:16:05.015 19:50:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 78822 /var/tmp/bperf.sock 00:16:05.015 19:50:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 78822 ']' 00:16:05.015 19:50:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:16:05.015 19:50:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:05.015 19:50:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:05.015 19:50:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:05.015 19:50:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:05.015 19:50:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:05.015 [2024-11-26 19:50:00.145421] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:16:05.015 [2024-11-26 19:50:00.145587] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78822 ] 00:16:05.272 [2024-11-26 19:50:00.284201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:05.272 [2024-11-26 19:50:00.319281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:05.272 [2024-11-26 19:50:00.349453] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:05.896 19:50:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:05.896 19:50:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:16:05.896 19:50:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:05.896 19:50:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:05.896 19:50:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:16:05.896 19:50:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.896 19:50:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:06.154 19:50:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.154 19:50:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:06.154 19:50:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:06.411 nvme0n1 00:16:06.411 19:50:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:16:06.411 19:50:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.411 19:50:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:06.411 19:50:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.411 19:50:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:16:06.411 19:50:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:06.411 Running I/O for 2 seconds... 
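Editor's note: before the error run starts, the xtrace lines above wire up the host and target as follows. The commands are reproduced from this log; the only assumption is that rpc_cmd calls without -s go to the nvmf target's default RPC socket, while -s /var/tmp/bperf.sock addresses the bdevperf host.

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/bperf.sock

    # Host: count NVMe errors and retry failed I/O indefinitely (--bdev-retry-count -1).
    $RPC -s "$SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Target: start with injection disabled, then (after the controller is attached with
    # data digest enabled) inject corrupted crc32c results, arguments as invoked above.
    $RPC accel_error_inject_error -o crc32c -t disable
    $RPC -s "$SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 256

    # Drive randread 4096 qd128 for 2 seconds; each corrupted digest shows up below as a
    # "data digest error" completed with a transient transport error status.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests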
00:16:06.411 [2024-11-26 19:50:01.534924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:06.411 [2024-11-26 19:50:01.534966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.411 [2024-11-26 19:50:01.534976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:06.411 [2024-11-26 19:50:01.549622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:06.411 [2024-11-26 19:50:01.549652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.411 [2024-11-26 19:50:01.549661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:06.411 [2024-11-26 19:50:01.564309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:06.411 [2024-11-26 19:50:01.564337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.412 [2024-11-26 19:50:01.564344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:06.412 [2024-11-26 19:50:01.578975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:06.412 [2024-11-26 19:50:01.579002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.412 [2024-11-26 19:50:01.579010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:06.412 [2024-11-26 19:50:01.593638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:06.412 [2024-11-26 19:50:01.593665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.412 [2024-11-26 19:50:01.593673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:06.412 [2024-11-26 19:50:01.608311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:06.412 [2024-11-26 19:50:01.608339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.412 [2024-11-26 19:50:01.608346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:06.412 [2024-11-26 19:50:01.622978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:06.412 [2024-11-26 19:50:01.623005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.412 [2024-11-26 19:50:01.623012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:06.412 [2024-11-26 19:50:01.637634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:06.412 [2024-11-26 19:50:01.637761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.412 [2024-11-26 19:50:01.637786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:06.412 [2024-11-26 19:50:01.652412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:06.412 [2024-11-26 19:50:01.652519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:5239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.412 [2024-11-26 19:50:01.652529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:06.669 [2024-11-26 19:50:01.667184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:06.669 [2024-11-26 19:50:01.667290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:22585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.670 [2024-11-26 19:50:01.667342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:06.670 [2024-11-26 19:50:01.681994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:06.670 [2024-11-26 19:50:01.682098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:11726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.670 [2024-11-26 19:50:01.682152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:06.670 [2024-11-26 19:50:01.696849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:06.670 [2024-11-26 19:50:01.696952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:10901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.670 [2024-11-26 19:50:01.697004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:06.670 [2024-11-26 19:50:01.711682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:06.670 [2024-11-26 19:50:01.711802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:8265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.670 [2024-11-26 19:50:01.711857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:06.670 [2024-11-26 19:50:01.726520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:06.670 [2024-11-26 19:50:01.726624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:10247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.670 [2024-11-26 19:50:01.726675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:06.670 [2024-11-26 19:50:01.741311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:06.670 [2024-11-26 19:50:01.741414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:21835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.670 [2024-11-26 19:50:01.741469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:06.670 [2024-11-26 19:50:01.756143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:06.670 [2024-11-26 19:50:01.756250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.670 [2024-11-26 19:50:01.756304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:06.670 [2024-11-26 19:50:01.770959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:06.670 [2024-11-26 19:50:01.771076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.670 [2024-11-26 19:50:01.771128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:06.670 [2024-11-26 19:50:01.785814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:06.670 [2024-11-26 19:50:01.785937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:21939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.670 [2024-11-26 19:50:01.785990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:06.670 [2024-11-26 19:50:01.800775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:06.670 [2024-11-26 19:50:01.800907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:19697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.670 [2024-11-26 19:50:01.800959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:06.670 [2024-11-26 19:50:01.815643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:06.670 [2024-11-26 19:50:01.815758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:1238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.670 [2024-11-26 19:50:01.815830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:06.670 [2024-11-26 19:50:01.830498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:06.670 [2024-11-26 19:50:01.830614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:12661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.670 [2024-11-26 19:50:01.830699] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:06.670 [2024-11-26 19:50:01.845390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:06.670 [2024-11-26 19:50:01.845511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.670 [2024-11-26 19:50:01.845563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:06.670 [2024-11-26 19:50:01.860258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:06.670 [2024-11-26 19:50:01.860378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.670 [2024-11-26 19:50:01.860430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:06.670 [2024-11-26 19:50:01.875132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:06.670 [2024-11-26 19:50:01.875246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:6333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.670 [2024-11-26 19:50:01.875298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:06.670 [2024-11-26 19:50:01.889974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:06.670 [2024-11-26 19:50:01.890092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:13824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.670 [2024-11-26 19:50:01.890144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:06.670 [2024-11-26 19:50:01.904826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:06.670 [2024-11-26 19:50:01.904942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:17990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.670 [2024-11-26 19:50:01.904993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:06.929 [2024-11-26 19:50:01.919693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:06.929 [2024-11-26 19:50:01.919812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:3857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.929 [2024-11-26 19:50:01.919875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:06.929 [2024-11-26 19:50:01.934548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:06.929 [2024-11-26 19:50:01.934667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:18677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.929 
[2024-11-26 19:50:01.934839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:06.929 [2024-11-26 19:50:01.949626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:06.929 [2024-11-26 19:50:01.949737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:16695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.929 [2024-11-26 19:50:01.949809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:06.929 [2024-11-26 19:50:01.964552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:06.929 [2024-11-26 19:50:01.964669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:22936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.929 [2024-11-26 19:50:01.964724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:06.929 [2024-11-26 19:50:01.979580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:06.929 [2024-11-26 19:50:01.979688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:12813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.929 [2024-11-26 19:50:01.979698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:06.929 [2024-11-26 19:50:01.994399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:06.929 [2024-11-26 19:50:01.994506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.929 [2024-11-26 19:50:01.994516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:06.929 [2024-11-26 19:50:02.009218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:06.929 [2024-11-26 19:50:02.009325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:22667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.929 [2024-11-26 19:50:02.009335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:06.929 [2024-11-26 19:50:02.024005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:06.929 [2024-11-26 19:50:02.024109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:8093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.929 [2024-11-26 19:50:02.024119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:06.929 [2024-11-26 19:50:02.038782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:06.929 [2024-11-26 19:50:02.038815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:19001 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.929 [2024-11-26 19:50:02.038822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:06.929 [2024-11-26 19:50:02.053544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:06.929 [2024-11-26 19:50:02.053580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:7258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.929 [2024-11-26 19:50:02.053589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:06.929 [2024-11-26 19:50:02.068265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:06.929 [2024-11-26 19:50:02.068298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:8029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.929 [2024-11-26 19:50:02.068306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:06.929 [2024-11-26 19:50:02.082973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:06.929 [2024-11-26 19:50:02.083009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:9763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.929 [2024-11-26 19:50:02.083017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:06.929 [2024-11-26 19:50:02.097689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:06.929 [2024-11-26 19:50:02.097727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:1582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.929 [2024-11-26 19:50:02.097735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:06.929 [2024-11-26 19:50:02.112404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:06.929 [2024-11-26 19:50:02.112438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:21668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.929 [2024-11-26 19:50:02.112445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:06.929 [2024-11-26 19:50:02.127103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:06.929 [2024-11-26 19:50:02.127135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:17663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.929 [2024-11-26 19:50:02.127143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:06.929 [2024-11-26 19:50:02.141796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:06.929 [2024-11-26 19:50:02.141919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:83 nsid:1 lba:16997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.929 [2024-11-26 19:50:02.141930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:06.929 [2024-11-26 19:50:02.156664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:06.929 [2024-11-26 19:50:02.156702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:24437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.929 [2024-11-26 19:50:02.156711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:06.929 [2024-11-26 19:50:02.171407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:06.929 [2024-11-26 19:50:02.171444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:17268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.929 [2024-11-26 19:50:02.171451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.188 [2024-11-26 19:50:02.186133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:07.188 [2024-11-26 19:50:02.186174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:21171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.188 [2024-11-26 19:50:02.186183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.188 [2024-11-26 19:50:02.200840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:07.188 [2024-11-26 19:50:02.200967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:23316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.188 [2024-11-26 19:50:02.200977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.188 [2024-11-26 19:50:02.215648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:07.188 [2024-11-26 19:50:02.215682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:13449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.188 [2024-11-26 19:50:02.215689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.188 [2024-11-26 19:50:02.230338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:07.188 [2024-11-26 19:50:02.230371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:14448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.188 [2024-11-26 19:50:02.230379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.188 [2024-11-26 19:50:02.245023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:07.188 [2024-11-26 19:50:02.245069] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:7593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.188 [2024-11-26 19:50:02.245078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.188 [2024-11-26 19:50:02.259728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:07.188 [2024-11-26 19:50:02.259760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:17995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.188 [2024-11-26 19:50:02.259781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.188 [2024-11-26 19:50:02.274422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:07.188 [2024-11-26 19:50:02.274456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:22226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.188 [2024-11-26 19:50:02.274464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.188 [2024-11-26 19:50:02.289128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:07.188 [2024-11-26 19:50:02.289162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:9645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.188 [2024-11-26 19:50:02.289169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.188 [2024-11-26 19:50:02.303828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:07.188 [2024-11-26 19:50:02.303950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:3331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.188 [2024-11-26 19:50:02.303959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.188 [2024-11-26 19:50:02.318583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:07.188 [2024-11-26 19:50:02.318615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:7065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.188 [2024-11-26 19:50:02.318622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.188 [2024-11-26 19:50:02.333284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:07.188 [2024-11-26 19:50:02.333317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:19627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.188 [2024-11-26 19:50:02.333324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.188 [2024-11-26 19:50:02.347981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 
00:16:07.188 [2024-11-26 19:50:02.348024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:16818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.188 [2024-11-26 19:50:02.348031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.188 [2024-11-26 19:50:02.362650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:07.188 [2024-11-26 19:50:02.362681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:13268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.188 [2024-11-26 19:50:02.362689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.188 [2024-11-26 19:50:02.377294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:07.188 [2024-11-26 19:50:02.377324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:13817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.188 [2024-11-26 19:50:02.377331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.188 [2024-11-26 19:50:02.391964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:07.188 [2024-11-26 19:50:02.391995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:23388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.188 [2024-11-26 19:50:02.392002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.188 [2024-11-26 19:50:02.406549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:07.188 [2024-11-26 19:50:02.406579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:15731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.188 [2024-11-26 19:50:02.406587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.188 [2024-11-26 19:50:02.421249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:07.188 [2024-11-26 19:50:02.421280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:10012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.188 [2024-11-26 19:50:02.421288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.447 [2024-11-26 19:50:02.435885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:07.447 [2024-11-26 19:50:02.435914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:13788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.447 [2024-11-26 19:50:02.435922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.447 [2024-11-26 19:50:02.450549] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:07.447 [2024-11-26 19:50:02.450578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:23839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.447 [2024-11-26 19:50:02.450586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.447 [2024-11-26 19:50:02.471547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:07.447 [2024-11-26 19:50:02.471575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:6742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.447 [2024-11-26 19:50:02.471582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.447 [2024-11-26 19:50:02.486229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:07.447 [2024-11-26 19:50:02.486348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:1997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.447 [2024-11-26 19:50:02.486358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.447 [2024-11-26 19:50:02.501035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:07.447 [2024-11-26 19:50:02.501144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:12060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.447 [2024-11-26 19:50:02.501153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.447 16952.00 IOPS, 66.22 MiB/s [2024-11-26T19:50:02.694Z] [2024-11-26 19:50:02.517288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:07.447 [2024-11-26 19:50:02.517317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:8989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.447 [2024-11-26 19:50:02.517326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.447 [2024-11-26 19:50:02.531981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:07.447 [2024-11-26 19:50:02.532093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:6192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.447 [2024-11-26 19:50:02.532103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.447 [2024-11-26 19:50:02.546860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:07.447 [2024-11-26 19:50:02.546976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.447 [2024-11-26 19:50:02.546985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.447 [2024-11-26 19:50:02.561668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:07.447 [2024-11-26 19:50:02.561700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:19210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.447 [2024-11-26 19:50:02.561707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.447 [2024-11-26 19:50:02.576392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:07.447 [2024-11-26 19:50:02.576424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:1905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.447 [2024-11-26 19:50:02.576431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.447 [2024-11-26 19:50:02.591159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:07.447 [2024-11-26 19:50:02.591191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.447 [2024-11-26 19:50:02.591199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.447 [2024-11-26 19:50:02.605787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:07.447 [2024-11-26 19:50:02.605817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:20703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.447 [2024-11-26 19:50:02.605825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.447 [2024-11-26 19:50:02.618791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:07.447 [2024-11-26 19:50:02.618816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:1633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.447 [2024-11-26 19:50:02.618822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.447 [2024-11-26 19:50:02.631611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:07.447 [2024-11-26 19:50:02.631636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:8996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.447 [2024-11-26 19:50:02.631642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.447 [2024-11-26 19:50:02.644550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:07.447 [2024-11-26 19:50:02.644578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:5713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.447 [2024-11-26 19:50:02.644584] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.447 [2024-11-26 19:50:02.657583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:07.447 [2024-11-26 19:50:02.657688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:11475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.447 [2024-11-26 19:50:02.657696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.447 [2024-11-26 19:50:02.670722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:07.447 [2024-11-26 19:50:02.670749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:20303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.447 [2024-11-26 19:50:02.670756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.447 [2024-11-26 19:50:02.683761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:07.447 [2024-11-26 19:50:02.683793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:24622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.447 [2024-11-26 19:50:02.683800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.706 [2024-11-26 19:50:02.696794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:07.706 [2024-11-26 19:50:02.696819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:21620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.706 [2024-11-26 19:50:02.696825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.706 [2024-11-26 19:50:02.709850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:07.706 [2024-11-26 19:50:02.709876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:5462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.706 [2024-11-26 19:50:02.709882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.706 [2024-11-26 19:50:02.722646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:07.706 [2024-11-26 19:50:02.722672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:2930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.706 [2024-11-26 19:50:02.722678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.706 [2024-11-26 19:50:02.735666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:07.706 [2024-11-26 19:50:02.735692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:17275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:07.706 [2024-11-26 19:50:02.735698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.706 [2024-11-26 19:50:02.748709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:07.706 [2024-11-26 19:50:02.748734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:11075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.706 [2024-11-26 19:50:02.748741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.706 [2024-11-26 19:50:02.761757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:07.706 [2024-11-26 19:50:02.761790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:22793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.706 [2024-11-26 19:50:02.761796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.706 [2024-11-26 19:50:02.774911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:07.706 [2024-11-26 19:50:02.774935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:20951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.706 [2024-11-26 19:50:02.774941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.706 [2024-11-26 19:50:02.787939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:07.706 [2024-11-26 19:50:02.787964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:14049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.706 [2024-11-26 19:50:02.787970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.706 [2024-11-26 19:50:02.800904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:07.706 [2024-11-26 19:50:02.801004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:12485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.706 [2024-11-26 19:50:02.801012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.706 [2024-11-26 19:50:02.814095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:07.706 [2024-11-26 19:50:02.814122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:14975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.706 [2024-11-26 19:50:02.814128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.706 [2024-11-26 19:50:02.827152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:07.706 [2024-11-26 19:50:02.827179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:3324 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.706 [2024-11-26 19:50:02.827185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.706 [2024-11-26 19:50:02.840181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:07.706 [2024-11-26 19:50:02.840282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:4910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.706 [2024-11-26 19:50:02.840290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.706 [2024-11-26 19:50:02.853316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:07.706 [2024-11-26 19:50:02.853344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:2416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.706 [2024-11-26 19:50:02.853349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.706 [2024-11-26 19:50:02.866379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:07.706 [2024-11-26 19:50:02.866405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:24120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.706 [2024-11-26 19:50:02.866411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.706 [2024-11-26 19:50:02.879417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:07.706 [2024-11-26 19:50:02.879511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:10447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.706 [2024-11-26 19:50:02.879519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.706 [2024-11-26 19:50:02.892541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:07.706 [2024-11-26 19:50:02.892567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:23064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.706 [2024-11-26 19:50:02.892573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.706 [2024-11-26 19:50:02.905592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:07.706 [2024-11-26 19:50:02.905619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:3625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.706 [2024-11-26 19:50:02.905625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.706 [2024-11-26 19:50:02.918664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:07.706 [2024-11-26 19:50:02.918690] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:60 nsid:1 lba:21669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.706 [2024-11-26 19:50:02.918696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.706 [2024-11-26 19:50:02.931733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:07.706 [2024-11-26 19:50:02.931759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:19628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.706 [2024-11-26 19:50:02.931776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.706 [2024-11-26 19:50:02.944745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:07.707 [2024-11-26 19:50:02.944780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:73 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.707 [2024-11-26 19:50:02.944787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.966 [2024-11-26 19:50:02.957776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:07.966 [2024-11-26 19:50:02.957798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:14918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.966 [2024-11-26 19:50:02.957804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.966 [2024-11-26 19:50:02.970795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:07.966 [2024-11-26 19:50:02.970817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:20522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.966 [2024-11-26 19:50:02.970822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.966 [2024-11-26 19:50:02.983848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:07.966 [2024-11-26 19:50:02.983871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:16737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.966 [2024-11-26 19:50:02.983876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.966 [2024-11-26 19:50:02.996865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:07.966 [2024-11-26 19:50:02.996956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:18268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.966 [2024-11-26 19:50:02.996964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.966 [2024-11-26 19:50:03.009967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:07.966 [2024-11-26 19:50:03.009991] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:24971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.966 [2024-11-26 19:50:03.009997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.966 [2024-11-26 19:50:03.022945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:07.966 [2024-11-26 19:50:03.022967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:1813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.966 [2024-11-26 19:50:03.022973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.966 [2024-11-26 19:50:03.035967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:07.966 [2024-11-26 19:50:03.036054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:7980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.966 [2024-11-26 19:50:03.036062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.966 [2024-11-26 19:50:03.049077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:07.966 [2024-11-26 19:50:03.049100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:18393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.966 [2024-11-26 19:50:03.049106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.966 [2024-11-26 19:50:03.062109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:07.966 [2024-11-26 19:50:03.062131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:6703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.966 [2024-11-26 19:50:03.062137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.966 [2024-11-26 19:50:03.075151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:07.966 [2024-11-26 19:50:03.075169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:4522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.966 [2024-11-26 19:50:03.075175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.966 [2024-11-26 19:50:03.094951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:07.966 [2024-11-26 19:50:03.095066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:17337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.966 [2024-11-26 19:50:03.095097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.966 [2024-11-26 19:50:03.111618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 
00:16:07.966 [2024-11-26 19:50:03.111646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:3899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.966 [2024-11-26 19:50:03.111655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.966 [2024-11-26 19:50:03.126265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:07.966 [2024-11-26 19:50:03.126289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:11990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.966 [2024-11-26 19:50:03.126298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.966 [2024-11-26 19:50:03.140852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:07.966 [2024-11-26 19:50:03.140875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:13338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.966 [2024-11-26 19:50:03.140882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.966 [2024-11-26 19:50:03.155517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:07.966 [2024-11-26 19:50:03.155540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:1064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.966 [2024-11-26 19:50:03.155548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.966 [2024-11-26 19:50:03.170090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:07.966 [2024-11-26 19:50:03.170113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:12778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.966 [2024-11-26 19:50:03.170120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.966 [2024-11-26 19:50:03.184738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:07.966 [2024-11-26 19:50:03.184761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:21656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.966 [2024-11-26 19:50:03.184776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:07.966 [2024-11-26 19:50:03.199312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:07.966 [2024-11-26 19:50:03.199335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:8327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.966 [2024-11-26 19:50:03.199343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:08.224 [2024-11-26 19:50:03.213974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xd20050) 00:16:08.224 [2024-11-26 19:50:03.213998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:2821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.224 [2024-11-26 19:50:03.214005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:08.224 [2024-11-26 19:50:03.228620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:08.224 [2024-11-26 19:50:03.228643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.224 [2024-11-26 19:50:03.228650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:08.224 [2024-11-26 19:50:03.243285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:08.224 [2024-11-26 19:50:03.243310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.224 [2024-11-26 19:50:03.243317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:08.224 [2024-11-26 19:50:03.257863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:08.224 [2024-11-26 19:50:03.257888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.224 [2024-11-26 19:50:03.257896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:08.225 [2024-11-26 19:50:03.272445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:08.225 [2024-11-26 19:50:03.272469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.225 [2024-11-26 19:50:03.272476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:08.225 [2024-11-26 19:50:03.287162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:08.225 [2024-11-26 19:50:03.287189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.225 [2024-11-26 19:50:03.287196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:08.225 [2024-11-26 19:50:03.301788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:08.225 [2024-11-26 19:50:03.301812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.225 [2024-11-26 19:50:03.301819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:08.225 [2024-11-26 19:50:03.316390] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:08.225 [2024-11-26 19:50:03.316415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.225 [2024-11-26 19:50:03.316423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:08.225 [2024-11-26 19:50:03.331023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:08.225 [2024-11-26 19:50:03.331047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.225 [2024-11-26 19:50:03.331061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:08.225 [2024-11-26 19:50:03.345619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:08.225 [2024-11-26 19:50:03.345641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.225 [2024-11-26 19:50:03.345648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:08.225 [2024-11-26 19:50:03.366505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:08.225 [2024-11-26 19:50:03.366528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.225 [2024-11-26 19:50:03.366536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:08.225 [2024-11-26 19:50:03.381051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:08.225 [2024-11-26 19:50:03.381074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.225 [2024-11-26 19:50:03.381082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:08.225 [2024-11-26 19:50:03.395838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:08.225 [2024-11-26 19:50:03.395864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.225 [2024-11-26 19:50:03.395871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:08.225 [2024-11-26 19:50:03.410624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:08.225 [2024-11-26 19:50:03.410647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.225 [2024-11-26 19:50:03.410654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:08.225 
[2024-11-26 19:50:03.425372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:08.225 [2024-11-26 19:50:03.425395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.225 [2024-11-26 19:50:03.425401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:08.225 [2024-11-26 19:50:03.440065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:08.225 [2024-11-26 19:50:03.440087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.225 [2024-11-26 19:50:03.440094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:08.225 [2024-11-26 19:50:03.454833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:08.225 [2024-11-26 19:50:03.454854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.225 [2024-11-26 19:50:03.454861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:08.225 [2024-11-26 19:50:03.469402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:08.225 [2024-11-26 19:50:03.469425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.225 [2024-11-26 19:50:03.469432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:08.484 [2024-11-26 19:50:03.484177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:08.484 [2024-11-26 19:50:03.484200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.484 [2024-11-26 19:50:03.484207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:08.484 [2024-11-26 19:50:03.498972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:08.484 [2024-11-26 19:50:03.498997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:15011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.484 [2024-11-26 19:50:03.499004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:08.484 17584.50 IOPS, 68.69 MiB/s [2024-11-26T19:50:03.731Z] [2024-11-26 19:50:03.513465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd20050) 00:16:08.484 [2024-11-26 19:50:03.513488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:13253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.484 [2024-11-26 19:50:03.513496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:08.484 00:16:08.484 Latency(us) 00:16:08.484 [2024-11-26T19:50:03.731Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:08.484 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:16:08.484 nvme0n1 : 2.01 17588.12 68.70 0.00 0.00 7272.51 6276.33 29239.14 00:16:08.484 [2024-11-26T19:50:03.731Z] =================================================================================================================== 00:16:08.484 [2024-11-26T19:50:03.731Z] Total : 17588.12 68.70 0.00 0.00 7272.51 6276.33 29239.14 00:16:08.484 { 00:16:08.484 "results": [ 00:16:08.484 { 00:16:08.484 "job": "nvme0n1", 00:16:08.484 "core_mask": "0x2", 00:16:08.484 "workload": "randread", 00:16:08.484 "status": "finished", 00:16:08.484 "queue_depth": 128, 00:16:08.484 "io_size": 4096, 00:16:08.484 "runtime": 2.006866, 00:16:08.484 "iops": 17588.119984094603, 00:16:08.484 "mibps": 68.70359368786954, 00:16:08.484 "io_failed": 0, 00:16:08.484 "io_timeout": 0, 00:16:08.484 "avg_latency_us": 7272.505972832733, 00:16:08.484 "min_latency_us": 6276.332307692308, 00:16:08.484 "max_latency_us": 29239.138461538463 00:16:08.484 } 00:16:08.484 ], 00:16:08.484 "core_count": 1 00:16:08.484 } 00:16:08.484 19:50:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:16:08.484 19:50:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:16:08.484 19:50:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:16:08.484 | .driver_specific 00:16:08.484 | .nvme_error 00:16:08.484 | .status_code 00:16:08.484 | .command_transient_transport_error' 00:16:08.484 19:50:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:16:08.484 19:50:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 138 > 0 )) 00:16:08.484 19:50:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 78822 00:16:08.484 19:50:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 78822 ']' 00:16:08.484 19:50:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 78822 00:16:08.484 19:50:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:16:08.484 19:50:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:08.484 19:50:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78822 00:16:08.484 killing process with pid 78822 00:16:08.484 Received shutdown signal, test time was about 2.000000 seconds 00:16:08.484 00:16:08.484 Latency(us) 00:16:08.484 [2024-11-26T19:50:03.731Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:08.484 [2024-11-26T19:50:03.731Z] =================================================================================================================== 00:16:08.484 [2024-11-26T19:50:03.731Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:08.484 19:50:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:08.484 19:50:03 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:08.484 19:50:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78822' 00:16:08.484 19:50:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 78822 00:16:08.484 19:50:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 78822 00:16:08.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:08.742 19:50:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:16:08.742 19:50:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:16:08.742 19:50:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:16:08.742 19:50:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:16:08.742 19:50:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:16:08.742 19:50:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=78877 00:16:08.742 19:50:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 78877 /var/tmp/bperf.sock 00:16:08.742 19:50:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 78877 ']' 00:16:08.742 19:50:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:08.743 19:50:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:08.743 19:50:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:16:08.743 19:50:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:08.743 19:50:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:08.743 19:50:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:08.743 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:08.743 Zero copy mechanism will not be used. 00:16:08.743 [2024-11-26 19:50:03.898425] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
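Note on the check that closed the previous run: the "(( 138 > 0 ))" assertion above comes from a single iostat RPC against the bdevperf socket plus a jq projection over the per-bdev NVMe error counters (kept because bdev_nvme_set_options is given --nvme-error-stat, as also visible later in this log). A minimal sketch of that step, assuming the same socket path and bdev name that appear here (/var/tmp/bperf.sock, nvme0n1):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bperf.sock
  # Read the per-bdev NVMe error statistics and pull out the transient transport error count.
  errcount=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # The digest errors above are reported as COMMAND TRANSIENT TRANSPORT ERROR (00/22)
  # completions, so the test only requires this counter to be non-zero.
  (( errcount > 0 )) || { echo "no transient transport errors recorded" >&2; exit 1; }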
00:16:08.743 [2024-11-26 19:50:03.898484] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78877 ] 00:16:09.000 [2024-11-26 19:50:04.031228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:09.000 [2024-11-26 19:50:04.073323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:09.000 [2024-11-26 19:50:04.114134] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:09.566 19:50:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:09.566 19:50:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:16:09.566 19:50:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:09.566 19:50:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:09.824 19:50:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:16:09.824 19:50:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.824 19:50:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:09.824 19:50:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.824 19:50:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:09.824 19:50:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:10.083 nvme0n1 00:16:10.083 19:50:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:16:10.083 19:50:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.083 19:50:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:10.083 19:50:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.083 19:50:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:16:10.083 19:50:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:10.344 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:10.344 Zero copy mechanism will not be used. 00:16:10.344 Running I/O for 2 seconds... 
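For readability, a condensed sketch of the setup sequence traced in the xtrace output above, before the 131072-byte / queue-depth-16 randread job starts. Paths, the target address 10.0.0.3:4420 and subsystem nqn.2016-06.io.spdk:cnode1 are taken from this log; rpc_cmd is the harness helper shown above (the RPC socket it targets is not visible in this excerpt):

  spdk=/home/vagrant/spdk_repo/spdk
  sock=/var/tmp/bperf.sock
  # Start bdevperf on its own RPC socket; -z makes it wait for perform_tests.
  # The harness waits for $sock to appear (waitforlisten) before issuing the RPCs below.
  "$spdk/build/examples/bdevperf" -m 2 -r "$sock" -w randread -o 131072 -t 2 -q 16 -z &
  # Keep per-status-code NVMe error counters and set the bdev retry count the test relies on.
  "$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Attach the controller over TCP with data digest enabled (--ddgst).
  "$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Inject crc32c corruption (the harness disabled injection earlier, before attaching),
  # then start the run; every corrupted digest surfaces below as a data digest error
  # followed by a transient transport error completion.
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
  "$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests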
00:16:10.344 [2024-11-26 19:50:05.372365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.344 [2024-11-26 19:50:05.372418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.344 [2024-11-26 19:50:05.372427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:10.344 [2024-11-26 19:50:05.375720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.344 [2024-11-26 19:50:05.375749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.344 [2024-11-26 19:50:05.375756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:10.344 [2024-11-26 19:50:05.378921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.344 [2024-11-26 19:50:05.378948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.344 [2024-11-26 19:50:05.378954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:10.344 [2024-11-26 19:50:05.382150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.344 [2024-11-26 19:50:05.382176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.344 [2024-11-26 19:50:05.382183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:10.344 [2024-11-26 19:50:05.385341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.344 [2024-11-26 19:50:05.385368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.344 [2024-11-26 19:50:05.385374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:10.344 [2024-11-26 19:50:05.388563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.344 [2024-11-26 19:50:05.388589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.344 [2024-11-26 19:50:05.388596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:10.344 [2024-11-26 19:50:05.391816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.344 [2024-11-26 19:50:05.391841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.344 [2024-11-26 19:50:05.391847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:10.344 [2024-11-26 19:50:05.394992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.344 [2024-11-26 19:50:05.395017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.344 [2024-11-26 19:50:05.395023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:10.344 [2024-11-26 19:50:05.398102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.344 [2024-11-26 19:50:05.398127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.344 [2024-11-26 19:50:05.398133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:10.344 [2024-11-26 19:50:05.401268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.344 [2024-11-26 19:50:05.401293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.345 [2024-11-26 19:50:05.401299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:10.345 [2024-11-26 19:50:05.404445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.345 [2024-11-26 19:50:05.404470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.345 [2024-11-26 19:50:05.404476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:10.345 [2024-11-26 19:50:05.407626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.345 [2024-11-26 19:50:05.407652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.345 [2024-11-26 19:50:05.407658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:10.345 [2024-11-26 19:50:05.410750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.345 [2024-11-26 19:50:05.410787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.345 [2024-11-26 19:50:05.410794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:10.345 [2024-11-26 19:50:05.413946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.345 [2024-11-26 19:50:05.413970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.345 [2024-11-26 19:50:05.413976] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:10.345 [2024-11-26 19:50:05.417103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.345 [2024-11-26 19:50:05.417128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.345 [2024-11-26 19:50:05.417135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:10.345 [2024-11-26 19:50:05.420258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.345 [2024-11-26 19:50:05.420283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.345 [2024-11-26 19:50:05.420289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:10.345 [2024-11-26 19:50:05.423443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.345 [2024-11-26 19:50:05.423468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.345 [2024-11-26 19:50:05.423474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:10.345 [2024-11-26 19:50:05.426596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.345 [2024-11-26 19:50:05.426622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.345 [2024-11-26 19:50:05.426628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:10.345 [2024-11-26 19:50:05.429756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.345 [2024-11-26 19:50:05.429790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.345 [2024-11-26 19:50:05.429795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:10.345 [2024-11-26 19:50:05.432988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.345 [2024-11-26 19:50:05.433014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.345 [2024-11-26 19:50:05.433019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:10.345 [2024-11-26 19:50:05.436220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.345 [2024-11-26 19:50:05.436245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.345 [2024-11-26 19:50:05.436252] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:10.345 [2024-11-26 19:50:05.439418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.345 [2024-11-26 19:50:05.439443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.345 [2024-11-26 19:50:05.439449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:10.345 [2024-11-26 19:50:05.442608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.345 [2024-11-26 19:50:05.442636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.345 [2024-11-26 19:50:05.442642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:10.345 [2024-11-26 19:50:05.445781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.345 [2024-11-26 19:50:05.445806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.345 [2024-11-26 19:50:05.445812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:10.345 [2024-11-26 19:50:05.448962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.345 [2024-11-26 19:50:05.448987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.345 [2024-11-26 19:50:05.448994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:10.345 [2024-11-26 19:50:05.452191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.345 [2024-11-26 19:50:05.452217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.345 [2024-11-26 19:50:05.452223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:10.345 [2024-11-26 19:50:05.455354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.345 [2024-11-26 19:50:05.455379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.345 [2024-11-26 19:50:05.455385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:10.345 [2024-11-26 19:50:05.458546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.345 [2024-11-26 19:50:05.458572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:16:10.345 [2024-11-26 19:50:05.458578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:10.345 [2024-11-26 19:50:05.461719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.345 [2024-11-26 19:50:05.461745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.345 [2024-11-26 19:50:05.461751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:10.345 [2024-11-26 19:50:05.464939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.345 [2024-11-26 19:50:05.464964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.345 [2024-11-26 19:50:05.464970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:10.345 [2024-11-26 19:50:05.468091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.345 [2024-11-26 19:50:05.468117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.345 [2024-11-26 19:50:05.468123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:10.345 [2024-11-26 19:50:05.471248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.345 [2024-11-26 19:50:05.471273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.345 [2024-11-26 19:50:05.471279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:10.345 [2024-11-26 19:50:05.474391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.345 [2024-11-26 19:50:05.474416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.345 [2024-11-26 19:50:05.474422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:10.345 [2024-11-26 19:50:05.477574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.345 [2024-11-26 19:50:05.477599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.345 [2024-11-26 19:50:05.477605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:10.345 [2024-11-26 19:50:05.480740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.345 [2024-11-26 19:50:05.480776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.345 [2024-11-26 19:50:05.480782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:10.345 [2024-11-26 19:50:05.483876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.345 [2024-11-26 19:50:05.483901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.345 [2024-11-26 19:50:05.483907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:10.345 [2024-11-26 19:50:05.487040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.345 [2024-11-26 19:50:05.487080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.346 [2024-11-26 19:50:05.487086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:10.346 [2024-11-26 19:50:05.490215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.346 [2024-11-26 19:50:05.490241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.346 [2024-11-26 19:50:05.490247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:10.346 [2024-11-26 19:50:05.493434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.346 [2024-11-26 19:50:05.493459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.346 [2024-11-26 19:50:05.493465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:10.346 [2024-11-26 19:50:05.496629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.346 [2024-11-26 19:50:05.496654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.346 [2024-11-26 19:50:05.496660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:10.346 [2024-11-26 19:50:05.499810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.346 [2024-11-26 19:50:05.499834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.346 [2024-11-26 19:50:05.499840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:10.346 [2024-11-26 19:50:05.502981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.346 [2024-11-26 19:50:05.503006] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.346 [2024-11-26 19:50:05.503012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:10.346 [2024-11-26 19:50:05.506137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.346 [2024-11-26 19:50:05.506163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.346 [2024-11-26 19:50:05.506169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:10.346 [2024-11-26 19:50:05.509285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.346 [2024-11-26 19:50:05.509311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.346 [2024-11-26 19:50:05.509317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:10.346 [2024-11-26 19:50:05.512441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.346 [2024-11-26 19:50:05.512469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.346 [2024-11-26 19:50:05.512475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:10.346 [2024-11-26 19:50:05.515628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.346 [2024-11-26 19:50:05.515653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.346 [2024-11-26 19:50:05.515660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:10.346 [2024-11-26 19:50:05.518862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.346 [2024-11-26 19:50:05.518887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.346 [2024-11-26 19:50:05.518893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:10.346 [2024-11-26 19:50:05.522006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.346 [2024-11-26 19:50:05.522032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.346 [2024-11-26 19:50:05.522038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:10.346 [2024-11-26 19:50:05.525188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 
00:16:10.346 [2024-11-26 19:50:05.525214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.346 [2024-11-26 19:50:05.525220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:10.346 [2024-11-26 19:50:05.528397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.346 [2024-11-26 19:50:05.528422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.346 [2024-11-26 19:50:05.528428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:10.346 [2024-11-26 19:50:05.531583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.346 [2024-11-26 19:50:05.531609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.346 [2024-11-26 19:50:05.531615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:10.346 [2024-11-26 19:50:05.534783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.346 [2024-11-26 19:50:05.534808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.346 [2024-11-26 19:50:05.534814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:10.346 [2024-11-26 19:50:05.538024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.346 [2024-11-26 19:50:05.538049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.346 [2024-11-26 19:50:05.538055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:10.346 [2024-11-26 19:50:05.541159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.346 [2024-11-26 19:50:05.541185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.346 [2024-11-26 19:50:05.541191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:10.346 [2024-11-26 19:50:05.544349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.346 [2024-11-26 19:50:05.544377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.346 [2024-11-26 19:50:05.544383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:10.346 [2024-11-26 19:50:05.547488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.346 [2024-11-26 19:50:05.547513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.346 [2024-11-26 19:50:05.547519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:10.346 [2024-11-26 19:50:05.550597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.346 [2024-11-26 19:50:05.550623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.346 [2024-11-26 19:50:05.550629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:10.346 [2024-11-26 19:50:05.553835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.346 [2024-11-26 19:50:05.553860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.346 [2024-11-26 19:50:05.553866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:10.346 [2024-11-26 19:50:05.557018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.346 [2024-11-26 19:50:05.557043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.346 [2024-11-26 19:50:05.557049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:10.346 [2024-11-26 19:50:05.560108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.346 [2024-11-26 19:50:05.560134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.346 [2024-11-26 19:50:05.560139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:10.346 [2024-11-26 19:50:05.563288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.346 [2024-11-26 19:50:05.563314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.346 [2024-11-26 19:50:05.563320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:10.346 [2024-11-26 19:50:05.566455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.346 [2024-11-26 19:50:05.566480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.346 [2024-11-26 19:50:05.566486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:10.346 [2024-11-26 19:50:05.569619] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.346 [2024-11-26 19:50:05.569645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.346 [2024-11-26 19:50:05.569651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:10.346 [2024-11-26 19:50:05.572778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.347 [2024-11-26 19:50:05.572803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.347 [2024-11-26 19:50:05.572808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:10.347 [2024-11-26 19:50:05.575866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.347 [2024-11-26 19:50:05.575890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.347 [2024-11-26 19:50:05.575896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:10.347 [2024-11-26 19:50:05.579052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.347 [2024-11-26 19:50:05.579086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.347 [2024-11-26 19:50:05.579091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:10.347 [2024-11-26 19:50:05.582249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.347 [2024-11-26 19:50:05.582276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.347 [2024-11-26 19:50:05.582282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:10.347 [2024-11-26 19:50:05.585367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.347 [2024-11-26 19:50:05.585393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.347 [2024-11-26 19:50:05.585400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:10.607 [2024-11-26 19:50:05.588502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.607 [2024-11-26 19:50:05.588528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.607 [2024-11-26 19:50:05.588534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:16:10.607 [2024-11-26 19:50:05.591668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.607 [2024-11-26 19:50:05.591693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.607 [2024-11-26 19:50:05.591699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:10.607 [2024-11-26 19:50:05.594829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.607 [2024-11-26 19:50:05.594853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.607 [2024-11-26 19:50:05.594858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:10.607 [2024-11-26 19:50:05.597994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.607 [2024-11-26 19:50:05.598020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.607 [2024-11-26 19:50:05.598026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:10.607 [2024-11-26 19:50:05.601157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.607 [2024-11-26 19:50:05.601183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.607 [2024-11-26 19:50:05.601189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:10.607 [2024-11-26 19:50:05.604340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.607 [2024-11-26 19:50:05.604366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.607 [2024-11-26 19:50:05.604372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:10.607 [2024-11-26 19:50:05.607599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.607 [2024-11-26 19:50:05.607624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.607 [2024-11-26 19:50:05.607630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:10.607 [2024-11-26 19:50:05.610816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.607 [2024-11-26 19:50:05.610840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.607 [2024-11-26 19:50:05.610846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:10.607 [2024-11-26 19:50:05.613985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.607 [2024-11-26 19:50:05.614011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.607 [2024-11-26 19:50:05.614017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:10.607 [2024-11-26 19:50:05.617176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.607 [2024-11-26 19:50:05.617201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.607 [2024-11-26 19:50:05.617207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:10.607 [2024-11-26 19:50:05.620299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.607 [2024-11-26 19:50:05.620324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.607 [2024-11-26 19:50:05.620329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:10.607 [2024-11-26 19:50:05.623475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.607 [2024-11-26 19:50:05.623501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.607 [2024-11-26 19:50:05.623507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:10.607 [2024-11-26 19:50:05.626710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.607 [2024-11-26 19:50:05.626735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.608 [2024-11-26 19:50:05.626741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:10.608 [2024-11-26 19:50:05.629945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.608 [2024-11-26 19:50:05.629971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.608 [2024-11-26 19:50:05.629977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:10.608 [2024-11-26 19:50:05.633199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.608 [2024-11-26 19:50:05.633225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.608 [2024-11-26 19:50:05.633231] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:10.608 [2024-11-26 19:50:05.636345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.608 [2024-11-26 19:50:05.636371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.608 [2024-11-26 19:50:05.636377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:10.608 [2024-11-26 19:50:05.639519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.608 [2024-11-26 19:50:05.639545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.608 [2024-11-26 19:50:05.639551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:10.608 [2024-11-26 19:50:05.642659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.608 [2024-11-26 19:50:05.642685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.608 [2024-11-26 19:50:05.642691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:10.608 [2024-11-26 19:50:05.645870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.608 [2024-11-26 19:50:05.645895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.608 [2024-11-26 19:50:05.645901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:10.608 [2024-11-26 19:50:05.649024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.608 [2024-11-26 19:50:05.649050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.608 [2024-11-26 19:50:05.649056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:10.608 [2024-11-26 19:50:05.652199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.608 [2024-11-26 19:50:05.652224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.608 [2024-11-26 19:50:05.652230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:10.608 [2024-11-26 19:50:05.655369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.608 [2024-11-26 19:50:05.655395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.608 
[2024-11-26 19:50:05.655401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:10.608 [2024-11-26 19:50:05.658505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.608 [2024-11-26 19:50:05.658530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.608 [2024-11-26 19:50:05.658536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:10.608 [2024-11-26 19:50:05.661656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.608 [2024-11-26 19:50:05.661682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.608 [2024-11-26 19:50:05.661688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:10.608 [2024-11-26 19:50:05.664817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.608 [2024-11-26 19:50:05.664841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.608 [2024-11-26 19:50:05.664847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:10.608 [2024-11-26 19:50:05.667983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.608 [2024-11-26 19:50:05.668009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.608 [2024-11-26 19:50:05.668015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:10.608 [2024-11-26 19:50:05.671139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.608 [2024-11-26 19:50:05.671165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.608 [2024-11-26 19:50:05.671171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:10.608 [2024-11-26 19:50:05.674317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.608 [2024-11-26 19:50:05.674342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.608 [2024-11-26 19:50:05.674348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:10.608 [2024-11-26 19:50:05.677463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.608 [2024-11-26 19:50:05.677490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7712 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.608 [2024-11-26 19:50:05.677496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:10.608 [2024-11-26 19:50:05.680662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.608 [2024-11-26 19:50:05.680689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.608 [2024-11-26 19:50:05.680695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:10.608 [2024-11-26 19:50:05.683823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.608 [2024-11-26 19:50:05.683847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.608 [2024-11-26 19:50:05.683853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:10.608 [2024-11-26 19:50:05.686949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.608 [2024-11-26 19:50:05.686974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.608 [2024-11-26 19:50:05.686980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:10.608 [2024-11-26 19:50:05.689997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.608 [2024-11-26 19:50:05.690022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.608 [2024-11-26 19:50:05.690028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:10.608 [2024-11-26 19:50:05.693184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.608 [2024-11-26 19:50:05.693210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.608 [2024-11-26 19:50:05.693215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:10.608 [2024-11-26 19:50:05.696320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.608 [2024-11-26 19:50:05.696346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.608 [2024-11-26 19:50:05.696352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:10.608 [2024-11-26 19:50:05.699428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.608 [2024-11-26 19:50:05.699453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:7 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.608 [2024-11-26 19:50:05.699459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:10.609 [2024-11-26 19:50:05.702588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.609 [2024-11-26 19:50:05.702614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.609 [2024-11-26 19:50:05.702620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:10.609 [2024-11-26 19:50:05.705786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.609 [2024-11-26 19:50:05.705810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.609 [2024-11-26 19:50:05.705816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:10.609 [2024-11-26 19:50:05.708947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.609 [2024-11-26 19:50:05.708973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.609 [2024-11-26 19:50:05.708979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:10.609 [2024-11-26 19:50:05.712100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.609 [2024-11-26 19:50:05.712126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.609 [2024-11-26 19:50:05.712132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:10.609 [2024-11-26 19:50:05.715274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.609 [2024-11-26 19:50:05.715301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.609 [2024-11-26 19:50:05.715306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:10.609 [2024-11-26 19:50:05.718424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.609 [2024-11-26 19:50:05.718449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.609 [2024-11-26 19:50:05.718455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:10.609 [2024-11-26 19:50:05.721626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.609 [2024-11-26 19:50:05.721652] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.609 [2024-11-26 19:50:05.721657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:10.609 [2024-11-26 19:50:05.724781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.609 [2024-11-26 19:50:05.724805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.609 [2024-11-26 19:50:05.724812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:10.609 [2024-11-26 19:50:05.727932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.609 [2024-11-26 19:50:05.727957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.609 [2024-11-26 19:50:05.727963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:10.609 [2024-11-26 19:50:05.731202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.609 [2024-11-26 19:50:05.731229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.609 [2024-11-26 19:50:05.731235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:10.609 [2024-11-26 19:50:05.734347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.609 [2024-11-26 19:50:05.734374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.609 [2024-11-26 19:50:05.734379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:10.609 [2024-11-26 19:50:05.737487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.609 [2024-11-26 19:50:05.737513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.609 [2024-11-26 19:50:05.737519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:10.609 [2024-11-26 19:50:05.740643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.609 [2024-11-26 19:50:05.740669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.609 [2024-11-26 19:50:05.740675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:10.609 [2024-11-26 19:50:05.743814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x189ca80) 00:16:10.609 [2024-11-26 19:50:05.743838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.609 [2024-11-26 19:50:05.743844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:10.609 [2024-11-26 19:50:05.747011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.609 [2024-11-26 19:50:05.747036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.609 [2024-11-26 19:50:05.747042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:10.610 [2024-11-26 19:50:05.750204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.610 [2024-11-26 19:50:05.750228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.610 [2024-11-26 19:50:05.750235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:10.610 [2024-11-26 19:50:05.753383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.610 [2024-11-26 19:50:05.753408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.610 [2024-11-26 19:50:05.753414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:10.610 [2024-11-26 19:50:05.756516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.610 [2024-11-26 19:50:05.756542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.610 [2024-11-26 19:50:05.756548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:10.610 [2024-11-26 19:50:05.759672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.610 [2024-11-26 19:50:05.759697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.610 [2024-11-26 19:50:05.759703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:10.610 [2024-11-26 19:50:05.762808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.610 [2024-11-26 19:50:05.762832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.610 [2024-11-26 19:50:05.762838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:10.610 [2024-11-26 19:50:05.765963] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.610 [2024-11-26 19:50:05.765987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.610 [2024-11-26 19:50:05.765993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:10.610 [2024-11-26 19:50:05.769250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.610 [2024-11-26 19:50:05.769275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.610 [2024-11-26 19:50:05.769281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:10.610 [2024-11-26 19:50:05.772377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.610 [2024-11-26 19:50:05.772402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.610 [2024-11-26 19:50:05.772408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:10.610 [2024-11-26 19:50:05.775602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.610 [2024-11-26 19:50:05.775627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.610 [2024-11-26 19:50:05.775633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:10.610 [2024-11-26 19:50:05.778786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.610 [2024-11-26 19:50:05.778809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.610 [2024-11-26 19:50:05.778815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:10.610 [2024-11-26 19:50:05.781985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.610 [2024-11-26 19:50:05.782011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.610 [2024-11-26 19:50:05.782017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:10.610 [2024-11-26 19:50:05.785184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.610 [2024-11-26 19:50:05.785209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.610 [2024-11-26 19:50:05.785215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:16:10.610 [2024-11-26 19:50:05.788410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.610 [2024-11-26 19:50:05.788436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.610 [2024-11-26 19:50:05.788442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:10.610 [2024-11-26 19:50:05.791573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.610 [2024-11-26 19:50:05.791598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.610 [2024-11-26 19:50:05.791604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:10.610 [2024-11-26 19:50:05.794692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.610 [2024-11-26 19:50:05.794718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.610 [2024-11-26 19:50:05.794724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:10.610 [2024-11-26 19:50:05.797920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.610 [2024-11-26 19:50:05.797945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.610 [2024-11-26 19:50:05.797951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:10.610 [2024-11-26 19:50:05.801157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.610 [2024-11-26 19:50:05.801182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.610 [2024-11-26 19:50:05.801188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:10.610 [2024-11-26 19:50:05.804343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.610 [2024-11-26 19:50:05.804369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.610 [2024-11-26 19:50:05.804375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:10.610 [2024-11-26 19:50:05.807523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.611 [2024-11-26 19:50:05.807549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.611 [2024-11-26 19:50:05.807555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:10.611 [2024-11-26 19:50:05.810689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.611 [2024-11-26 19:50:05.810713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.611 [2024-11-26 19:50:05.810719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:10.611 [2024-11-26 19:50:05.813857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.611 [2024-11-26 19:50:05.813881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.611 [2024-11-26 19:50:05.813887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:10.611 [2024-11-26 19:50:05.817044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.611 [2024-11-26 19:50:05.817069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.611 [2024-11-26 19:50:05.817075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:10.611 [2024-11-26 19:50:05.820220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.611 [2024-11-26 19:50:05.820245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.611 [2024-11-26 19:50:05.820251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:10.611 [2024-11-26 19:50:05.823376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.611 [2024-11-26 19:50:05.823402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.611 [2024-11-26 19:50:05.823408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:10.611 [2024-11-26 19:50:05.826559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.611 [2024-11-26 19:50:05.826584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.611 [2024-11-26 19:50:05.826590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:10.611 [2024-11-26 19:50:05.829705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.611 [2024-11-26 19:50:05.829730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.611 [2024-11-26 19:50:05.829737] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:10.611 [2024-11-26 19:50:05.832883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.611 [2024-11-26 19:50:05.832908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.611 [2024-11-26 19:50:05.832913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:10.611 [2024-11-26 19:50:05.836090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.611 [2024-11-26 19:50:05.836115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.611 [2024-11-26 19:50:05.836122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:10.611 [2024-11-26 19:50:05.839210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.611 [2024-11-26 19:50:05.839234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.611 [2024-11-26 19:50:05.839240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:10.611 [2024-11-26 19:50:05.842411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.611 [2024-11-26 19:50:05.842436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.611 [2024-11-26 19:50:05.842441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:10.611 [2024-11-26 19:50:05.845594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.612 [2024-11-26 19:50:05.845620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.612 [2024-11-26 19:50:05.845626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:10.612 [2024-11-26 19:50:05.848775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.612 [2024-11-26 19:50:05.848799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.612 [2024-11-26 19:50:05.848805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:10.872 [2024-11-26 19:50:05.851969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.872 [2024-11-26 19:50:05.851994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.872 
[2024-11-26 19:50:05.852000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:10.872 [2024-11-26 19:50:05.855079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.872 [2024-11-26 19:50:05.855103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.872 [2024-11-26 19:50:05.855109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:10.872 [2024-11-26 19:50:05.858272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.872 [2024-11-26 19:50:05.858297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.872 [2024-11-26 19:50:05.858302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:10.872 [2024-11-26 19:50:05.861488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.872 [2024-11-26 19:50:05.861513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.872 [2024-11-26 19:50:05.861519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:10.872 [2024-11-26 19:50:05.864703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.873 [2024-11-26 19:50:05.864729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.873 [2024-11-26 19:50:05.864735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:10.873 [2024-11-26 19:50:05.867847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.873 [2024-11-26 19:50:05.867871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.873 [2024-11-26 19:50:05.867877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:10.873 [2024-11-26 19:50:05.871009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.873 [2024-11-26 19:50:05.871034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.873 [2024-11-26 19:50:05.871039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:10.873 [2024-11-26 19:50:05.874170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.873 [2024-11-26 19:50:05.874196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1504 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.873 [2024-11-26 19:50:05.874201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:10.873 [2024-11-26 19:50:05.877359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.873 [2024-11-26 19:50:05.877385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.873 [2024-11-26 19:50:05.877391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:10.873 [2024-11-26 19:50:05.880479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.873 [2024-11-26 19:50:05.880505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.873 [2024-11-26 19:50:05.880510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:10.873 [2024-11-26 19:50:05.883667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.873 [2024-11-26 19:50:05.883694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.873 [2024-11-26 19:50:05.883700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:10.873 [2024-11-26 19:50:05.886854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.873 [2024-11-26 19:50:05.886882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.873 [2024-11-26 19:50:05.886887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:10.873 [2024-11-26 19:50:05.890015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.873 [2024-11-26 19:50:05.890041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.873 [2024-11-26 19:50:05.890047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:10.873 [2024-11-26 19:50:05.893192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.873 [2024-11-26 19:50:05.893217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.873 [2024-11-26 19:50:05.893223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:10.873 [2024-11-26 19:50:05.896377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.873 [2024-11-26 19:50:05.896542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:5 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.873 [2024-11-26 19:50:05.896551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:10.873 [2024-11-26 19:50:05.899713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.873 [2024-11-26 19:50:05.899736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.873 [2024-11-26 19:50:05.899743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:10.873 [2024-11-26 19:50:05.902973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.873 [2024-11-26 19:50:05.903082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.873 [2024-11-26 19:50:05.903143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:10.873 [2024-11-26 19:50:05.906338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.873 [2024-11-26 19:50:05.906435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.873 [2024-11-26 19:50:05.906481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:10.873 [2024-11-26 19:50:05.909666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.873 [2024-11-26 19:50:05.909776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.873 [2024-11-26 19:50:05.909823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:10.873 [2024-11-26 19:50:05.912937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.873 [2024-11-26 19:50:05.913035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.873 [2024-11-26 19:50:05.913081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:10.873 [2024-11-26 19:50:05.916243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.873 [2024-11-26 19:50:05.916341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.873 [2024-11-26 19:50:05.916385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:10.873 [2024-11-26 19:50:05.919519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.873 [2024-11-26 19:50:05.919616] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.873 [2024-11-26 19:50:05.919662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:10.873 [2024-11-26 19:50:05.922811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.873 [2024-11-26 19:50:05.922902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.873 [2024-11-26 19:50:05.922948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:10.873 [2024-11-26 19:50:05.926039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.873 [2024-11-26 19:50:05.926134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.873 [2024-11-26 19:50:05.926207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:10.873 [2024-11-26 19:50:05.929295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.873 [2024-11-26 19:50:05.929391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.873 [2024-11-26 19:50:05.929434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:10.873 [2024-11-26 19:50:05.932523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.873 [2024-11-26 19:50:05.932618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.873 [2024-11-26 19:50:05.932661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:10.874 [2024-11-26 19:50:05.935863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.874 [2024-11-26 19:50:05.935959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.874 [2024-11-26 19:50:05.936003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:10.874 [2024-11-26 19:50:05.939102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.874 [2024-11-26 19:50:05.939196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.874 [2024-11-26 19:50:05.939238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:10.874 [2024-11-26 19:50:05.942323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x189ca80) 00:16:10.874 [2024-11-26 19:50:05.942417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.874 [2024-11-26 19:50:05.942459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:10.874 [2024-11-26 19:50:05.945656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.874 [2024-11-26 19:50:05.945752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.874 [2024-11-26 19:50:05.945807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:10.874 [2024-11-26 19:50:05.948907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.874 [2024-11-26 19:50:05.949002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.874 [2024-11-26 19:50:05.949044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:10.874 [2024-11-26 19:50:05.952186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.874 [2024-11-26 19:50:05.952282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.874 [2024-11-26 19:50:05.952324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:10.874 [2024-11-26 19:50:05.955516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.874 [2024-11-26 19:50:05.955611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.874 [2024-11-26 19:50:05.955653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:10.874 [2024-11-26 19:50:05.958822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.874 [2024-11-26 19:50:05.958916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.874 [2024-11-26 19:50:05.958974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:10.874 [2024-11-26 19:50:05.962095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.874 [2024-11-26 19:50:05.962186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.874 [2024-11-26 19:50:05.962227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:10.874 [2024-11-26 19:50:05.965352] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.874 [2024-11-26 19:50:05.965449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.874 [2024-11-26 19:50:05.965495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:10.874 [2024-11-26 19:50:05.968844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.874 [2024-11-26 19:50:05.968872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.874 [2024-11-26 19:50:05.968878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:10.874 [2024-11-26 19:50:05.972058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.874 [2024-11-26 19:50:05.972155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.874 [2024-11-26 19:50:05.972162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:10.874 [2024-11-26 19:50:05.975305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.874 [2024-11-26 19:50:05.975330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.874 [2024-11-26 19:50:05.975336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:10.874 [2024-11-26 19:50:05.978462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.874 [2024-11-26 19:50:05.978487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.874 [2024-11-26 19:50:05.978493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:10.874 [2024-11-26 19:50:05.981609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.874 [2024-11-26 19:50:05.981635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.874 [2024-11-26 19:50:05.981641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:10.874 [2024-11-26 19:50:05.984780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.874 [2024-11-26 19:50:05.984803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.874 [2024-11-26 19:50:05.984809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:16:10.874 [2024-11-26 19:50:05.987861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.874 [2024-11-26 19:50:05.987887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.874 [2024-11-26 19:50:05.987908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:10.874 [2024-11-26 19:50:05.991080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.874 [2024-11-26 19:50:05.991105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.874 [2024-11-26 19:50:05.991111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:10.874 [2024-11-26 19:50:05.994239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.874 [2024-11-26 19:50:05.994337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.874 [2024-11-26 19:50:05.994344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:10.874 [2024-11-26 19:50:05.997484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.874 [2024-11-26 19:50:05.997511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.874 [2024-11-26 19:50:05.997517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:10.874 [2024-11-26 19:50:06.000647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.874 [2024-11-26 19:50:06.000670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.874 [2024-11-26 19:50:06.000676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:10.874 [2024-11-26 19:50:06.003818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.874 [2024-11-26 19:50:06.003839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.875 [2024-11-26 19:50:06.003845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:10.875 [2024-11-26 19:50:06.006957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.875 [2024-11-26 19:50:06.006978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.875 [2024-11-26 19:50:06.006984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:10.875 [2024-11-26 19:50:06.010105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.875 [2024-11-26 19:50:06.010126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.875 [2024-11-26 19:50:06.010132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:10.875 [2024-11-26 19:50:06.013250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.875 [2024-11-26 19:50:06.013271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.875 [2024-11-26 19:50:06.013277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:10.875 [2024-11-26 19:50:06.016457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.875 [2024-11-26 19:50:06.016479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.875 [2024-11-26 19:50:06.016485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:10.875 [2024-11-26 19:50:06.019649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.875 [2024-11-26 19:50:06.019670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.875 [2024-11-26 19:50:06.019676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:10.875 [2024-11-26 19:50:06.022723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.875 [2024-11-26 19:50:06.022745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.875 [2024-11-26 19:50:06.022750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:10.875 [2024-11-26 19:50:06.025895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.875 [2024-11-26 19:50:06.025916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.875 [2024-11-26 19:50:06.025922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:10.875 [2024-11-26 19:50:06.029009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.875 [2024-11-26 19:50:06.029030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.875 [2024-11-26 19:50:06.029035] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:10.875 [2024-11-26 19:50:06.032091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.875 [2024-11-26 19:50:06.032112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.875 [2024-11-26 19:50:06.032118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:10.875 [2024-11-26 19:50:06.035254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.875 [2024-11-26 19:50:06.035275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.875 [2024-11-26 19:50:06.035281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:10.875 [2024-11-26 19:50:06.038399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.875 [2024-11-26 19:50:06.038420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.875 [2024-11-26 19:50:06.038425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:10.875 [2024-11-26 19:50:06.041553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.875 [2024-11-26 19:50:06.041574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.875 [2024-11-26 19:50:06.041580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:10.875 [2024-11-26 19:50:06.044656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.875 [2024-11-26 19:50:06.044678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.875 [2024-11-26 19:50:06.044683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:10.875 [2024-11-26 19:50:06.047806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.875 [2024-11-26 19:50:06.047826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.875 [2024-11-26 19:50:06.047832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:10.875 [2024-11-26 19:50:06.050911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.875 [2024-11-26 19:50:06.050931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.875 
[2024-11-26 19:50:06.050937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:10.875 [2024-11-26 19:50:06.054061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.875 [2024-11-26 19:50:06.054083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.875 [2024-11-26 19:50:06.054089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:10.875 [2024-11-26 19:50:06.057192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.875 [2024-11-26 19:50:06.057214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.875 [2024-11-26 19:50:06.057220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:10.875 [2024-11-26 19:50:06.060364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.875 [2024-11-26 19:50:06.060385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.875 [2024-11-26 19:50:06.060391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:10.875 [2024-11-26 19:50:06.063476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.875 [2024-11-26 19:50:06.063498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.875 [2024-11-26 19:50:06.063504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:10.875 [2024-11-26 19:50:06.066628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.875 [2024-11-26 19:50:06.066650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.875 [2024-11-26 19:50:06.066655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:10.876 [2024-11-26 19:50:06.069734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.876 [2024-11-26 19:50:06.069756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.876 [2024-11-26 19:50:06.069761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:10.876 [2024-11-26 19:50:06.072837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.876 [2024-11-26 19:50:06.072858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18528 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.876 [2024-11-26 19:50:06.072863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:10.876 [2024-11-26 19:50:06.075969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.876 [2024-11-26 19:50:06.075991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.876 [2024-11-26 19:50:06.075996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:10.876 [2024-11-26 19:50:06.079113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.876 [2024-11-26 19:50:06.079133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.876 [2024-11-26 19:50:06.079139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:10.876 [2024-11-26 19:50:06.082193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.876 [2024-11-26 19:50:06.082214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.876 [2024-11-26 19:50:06.082220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:10.876 [2024-11-26 19:50:06.085289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.876 [2024-11-26 19:50:06.085310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.876 [2024-11-26 19:50:06.085316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:10.876 [2024-11-26 19:50:06.088365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.876 [2024-11-26 19:50:06.088387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.876 [2024-11-26 19:50:06.088392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:10.876 [2024-11-26 19:50:06.091486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.876 [2024-11-26 19:50:06.091508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.876 [2024-11-26 19:50:06.091514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:10.876 [2024-11-26 19:50:06.094644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.876 [2024-11-26 19:50:06.094666] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.876 [2024-11-26 19:50:06.094671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:10.876 [2024-11-26 19:50:06.097793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.876 [2024-11-26 19:50:06.097814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.876 [2024-11-26 19:50:06.097819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:10.876 [2024-11-26 19:50:06.100907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.876 [2024-11-26 19:50:06.100928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.876 [2024-11-26 19:50:06.100934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:10.876 [2024-11-26 19:50:06.104078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.876 [2024-11-26 19:50:06.104100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.876 [2024-11-26 19:50:06.104106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:10.876 [2024-11-26 19:50:06.107240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.876 [2024-11-26 19:50:06.107261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.876 [2024-11-26 19:50:06.107267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:10.876 [2024-11-26 19:50:06.110318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.876 [2024-11-26 19:50:06.110339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.876 [2024-11-26 19:50:06.110345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:10.876 [2024-11-26 19:50:06.113433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:10.876 [2024-11-26 19:50:06.113455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.876 [2024-11-26 19:50:06.113460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:11.137 [2024-11-26 19:50:06.116516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.137 [2024-11-26 19:50:06.116537] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.137 [2024-11-26 19:50:06.116543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:11.137 [2024-11-26 19:50:06.119638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.137 [2024-11-26 19:50:06.119660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.137 [2024-11-26 19:50:06.119666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:11.137 [2024-11-26 19:50:06.122785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.137 [2024-11-26 19:50:06.122805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.137 [2024-11-26 19:50:06.122811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:11.137 [2024-11-26 19:50:06.125920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.137 [2024-11-26 19:50:06.125941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.137 [2024-11-26 19:50:06.125946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:11.137 [2024-11-26 19:50:06.129111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.137 [2024-11-26 19:50:06.129133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.137 [2024-11-26 19:50:06.129138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:11.137 [2024-11-26 19:50:06.132278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.137 [2024-11-26 19:50:06.132299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.137 [2024-11-26 19:50:06.132304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:11.137 [2024-11-26 19:50:06.135428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.137 [2024-11-26 19:50:06.135449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.137 [2024-11-26 19:50:06.135455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:11.137 [2024-11-26 19:50:06.138513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x189ca80) 00:16:11.137 [2024-11-26 19:50:06.138535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.137 [2024-11-26 19:50:06.138540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:11.137 [2024-11-26 19:50:06.141730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.137 [2024-11-26 19:50:06.141752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.137 [2024-11-26 19:50:06.141758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:11.138 [2024-11-26 19:50:06.144846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.138 [2024-11-26 19:50:06.144867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.138 [2024-11-26 19:50:06.144873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:11.138 [2024-11-26 19:50:06.148025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.138 [2024-11-26 19:50:06.148047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.138 [2024-11-26 19:50:06.148053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:11.138 [2024-11-26 19:50:06.151195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.138 [2024-11-26 19:50:06.151217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.138 [2024-11-26 19:50:06.151223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:11.138 [2024-11-26 19:50:06.154364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.138 [2024-11-26 19:50:06.154386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.138 [2024-11-26 19:50:06.154392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:11.138 [2024-11-26 19:50:06.157570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.138 [2024-11-26 19:50:06.157591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.138 [2024-11-26 19:50:06.157597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:11.138 [2024-11-26 19:50:06.160780] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.138 [2024-11-26 19:50:06.160800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.138 [2024-11-26 19:50:06.160806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:11.138 [2024-11-26 19:50:06.163903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.138 [2024-11-26 19:50:06.163924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.138 [2024-11-26 19:50:06.163930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:11.138 [2024-11-26 19:50:06.167012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.138 [2024-11-26 19:50:06.167033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.138 [2024-11-26 19:50:06.167039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:11.138 [2024-11-26 19:50:06.170126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.138 [2024-11-26 19:50:06.170147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.138 [2024-11-26 19:50:06.170153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:11.138 [2024-11-26 19:50:06.173220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.138 [2024-11-26 19:50:06.173241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.138 [2024-11-26 19:50:06.173247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:11.138 [2024-11-26 19:50:06.176379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.138 [2024-11-26 19:50:06.176400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.138 [2024-11-26 19:50:06.176406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:11.138 [2024-11-26 19:50:06.179542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.138 [2024-11-26 19:50:06.179564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.138 [2024-11-26 19:50:06.179570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 
m:0 dnr:0 00:16:11.138 [2024-11-26 19:50:06.182666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.138 [2024-11-26 19:50:06.182688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.138 [2024-11-26 19:50:06.182694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:11.138 [2024-11-26 19:50:06.185861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.138 [2024-11-26 19:50:06.185882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.138 [2024-11-26 19:50:06.185888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:11.138 [2024-11-26 19:50:06.188968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.138 [2024-11-26 19:50:06.188989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.138 [2024-11-26 19:50:06.188995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:11.138 [2024-11-26 19:50:06.192102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.138 [2024-11-26 19:50:06.192124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.138 [2024-11-26 19:50:06.192129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:11.138 [2024-11-26 19:50:06.195252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.138 [2024-11-26 19:50:06.195274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.138 [2024-11-26 19:50:06.195280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:11.138 [2024-11-26 19:50:06.198410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.138 [2024-11-26 19:50:06.198431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.138 [2024-11-26 19:50:06.198436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:11.138 [2024-11-26 19:50:06.201570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.138 [2024-11-26 19:50:06.201591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.138 [2024-11-26 19:50:06.201596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:16:11.138 [2024-11-26 19:50:06.204700] [... repetitive output condensed: nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done repeatedly reports *ERROR*: data digest error on tqpair=(0x189ca80); after each occurrence nvme_qpair.c: 243:nvme_io_qpair_print_command prints the affected READ command (sqid:1, cid cycling 0-15, nsid:1, len:32, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and nvme_qpair.c: 474:spdk_nvme_print_completion prints its completion with COMMAND TRANSIENT TRANSPORT ERROR (00/22). The pattern repeats from 19:50:06.204700 through 19:50:06.642134; individual entries omitted here ...] 
00:16:11.140 9718.00 IOPS, 1214.75 MiB/s [2024-11-26T19:50:06.387Z] 
00:16:11.405 [2024-11-26 19:50:06.642134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.405 [2024-11-26 19:50:06.642156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.405 [2024-11-26 19:50:06.642162] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:11.665 [2024-11-26 19:50:06.645324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.665 [2024-11-26 19:50:06.645345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.665 [2024-11-26 19:50:06.645351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:11.665 [2024-11-26 19:50:06.648566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.665 [2024-11-26 19:50:06.648588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.665 [2024-11-26 19:50:06.648594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:11.665 [2024-11-26 19:50:06.651679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.666 [2024-11-26 19:50:06.651701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.666 [2024-11-26 19:50:06.651707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:11.666 [2024-11-26 19:50:06.654863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.666 [2024-11-26 19:50:06.654883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.666 [2024-11-26 19:50:06.654888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:11.666 [2024-11-26 19:50:06.658060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.666 [2024-11-26 19:50:06.658081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.666 [2024-11-26 19:50:06.658087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:11.666 [2024-11-26 19:50:06.661263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.666 [2024-11-26 19:50:06.661286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.666 [2024-11-26 19:50:06.661292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:11.666 [2024-11-26 19:50:06.664441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.666 [2024-11-26 19:50:06.664463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.666 
[2024-11-26 19:50:06.664468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:11.666 [2024-11-26 19:50:06.667554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.666 [2024-11-26 19:50:06.667575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.666 [2024-11-26 19:50:06.667581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:11.666 [2024-11-26 19:50:06.670751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.666 [2024-11-26 19:50:06.670781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.666 [2024-11-26 19:50:06.670788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:11.666 [2024-11-26 19:50:06.673899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.666 [2024-11-26 19:50:06.673921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.666 [2024-11-26 19:50:06.673926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:11.666 [2024-11-26 19:50:06.677068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.666 [2024-11-26 19:50:06.677089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.666 [2024-11-26 19:50:06.677095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:11.666 [2024-11-26 19:50:06.680240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.666 [2024-11-26 19:50:06.680262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.666 [2024-11-26 19:50:06.680268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:11.666 [2024-11-26 19:50:06.683400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.666 [2024-11-26 19:50:06.683422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.666 [2024-11-26 19:50:06.683428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:11.666 [2024-11-26 19:50:06.686535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.666 [2024-11-26 19:50:06.686557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3712 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.666 [2024-11-26 19:50:06.686562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:11.666 [2024-11-26 19:50:06.689662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.666 [2024-11-26 19:50:06.689684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.666 [2024-11-26 19:50:06.689689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:11.666 [2024-11-26 19:50:06.692808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.666 [2024-11-26 19:50:06.692827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.666 [2024-11-26 19:50:06.692833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:11.666 [2024-11-26 19:50:06.695871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.666 [2024-11-26 19:50:06.695892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.666 [2024-11-26 19:50:06.695897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:11.666 [2024-11-26 19:50:06.699016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.666 [2024-11-26 19:50:06.699037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.666 [2024-11-26 19:50:06.699043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:11.666 [2024-11-26 19:50:06.702147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.666 [2024-11-26 19:50:06.702169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.666 [2024-11-26 19:50:06.702174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:11.666 [2024-11-26 19:50:06.705351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.666 [2024-11-26 19:50:06.705373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.666 [2024-11-26 19:50:06.705379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:11.666 [2024-11-26 19:50:06.708564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.666 [2024-11-26 19:50:06.708586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:4 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.666 [2024-11-26 19:50:06.708592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:11.666 [2024-11-26 19:50:06.711724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.666 [2024-11-26 19:50:06.711747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.666 [2024-11-26 19:50:06.711753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:11.666 [2024-11-26 19:50:06.714883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.666 [2024-11-26 19:50:06.714904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.666 [2024-11-26 19:50:06.714910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:11.666 [2024-11-26 19:50:06.717981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.666 [2024-11-26 19:50:06.718005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.666 [2024-11-26 19:50:06.718010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:11.666 [2024-11-26 19:50:06.721133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.666 [2024-11-26 19:50:06.721155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.666 [2024-11-26 19:50:06.721161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:11.666 [2024-11-26 19:50:06.724341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.666 [2024-11-26 19:50:06.724364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.666 [2024-11-26 19:50:06.724369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:11.666 [2024-11-26 19:50:06.727520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.666 [2024-11-26 19:50:06.727541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.666 [2024-11-26 19:50:06.727547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:11.666 [2024-11-26 19:50:06.730635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.666 [2024-11-26 19:50:06.730657] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.666 [2024-11-26 19:50:06.730663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:11.666 [2024-11-26 19:50:06.733738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.666 [2024-11-26 19:50:06.733760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.666 [2024-11-26 19:50:06.733775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:11.666 [2024-11-26 19:50:06.736902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.667 [2024-11-26 19:50:06.736923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.667 [2024-11-26 19:50:06.736928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:11.667 [2024-11-26 19:50:06.740056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.667 [2024-11-26 19:50:06.740078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.667 [2024-11-26 19:50:06.740084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:11.667 [2024-11-26 19:50:06.743179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.667 [2024-11-26 19:50:06.743199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.667 [2024-11-26 19:50:06.743205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:11.667 [2024-11-26 19:50:06.746316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.667 [2024-11-26 19:50:06.746336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.667 [2024-11-26 19:50:06.746342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:11.667 [2024-11-26 19:50:06.749411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.667 [2024-11-26 19:50:06.749433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.667 [2024-11-26 19:50:06.749438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:11.667 [2024-11-26 19:50:06.752586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 
00:16:11.667 [2024-11-26 19:50:06.752609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.667 [2024-11-26 19:50:06.752615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:11.667 [2024-11-26 19:50:06.755805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.667 [2024-11-26 19:50:06.755825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.667 [2024-11-26 19:50:06.755832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:11.667 [2024-11-26 19:50:06.758949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.667 [2024-11-26 19:50:06.758969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.667 [2024-11-26 19:50:06.758975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:11.667 [2024-11-26 19:50:06.762082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.667 [2024-11-26 19:50:06.762105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.667 [2024-11-26 19:50:06.762110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:11.667 [2024-11-26 19:50:06.765290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.667 [2024-11-26 19:50:06.765313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.667 [2024-11-26 19:50:06.765318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:11.667 [2024-11-26 19:50:06.768467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.667 [2024-11-26 19:50:06.768488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.667 [2024-11-26 19:50:06.768494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:11.667 [2024-11-26 19:50:06.771696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.667 [2024-11-26 19:50:06.771718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.667 [2024-11-26 19:50:06.771724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:11.667 [2024-11-26 19:50:06.774909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x189ca80) 00:16:11.667 [2024-11-26 19:50:06.774930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.667 [2024-11-26 19:50:06.774936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:11.667 [2024-11-26 19:50:06.778091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.667 [2024-11-26 19:50:06.778113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.667 [2024-11-26 19:50:06.778119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:11.667 [2024-11-26 19:50:06.781209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.667 [2024-11-26 19:50:06.781231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.667 [2024-11-26 19:50:06.781236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:11.667 [2024-11-26 19:50:06.784418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.667 [2024-11-26 19:50:06.784440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.667 [2024-11-26 19:50:06.784446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:11.667 [2024-11-26 19:50:06.787610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.667 [2024-11-26 19:50:06.787632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.667 [2024-11-26 19:50:06.787638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:11.667 [2024-11-26 19:50:06.790798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.667 [2024-11-26 19:50:06.790818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.667 [2024-11-26 19:50:06.790824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:11.667 [2024-11-26 19:50:06.793936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.667 [2024-11-26 19:50:06.793957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.667 [2024-11-26 19:50:06.793963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:11.667 [2024-11-26 19:50:06.797178] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.667 [2024-11-26 19:50:06.797199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.667 [2024-11-26 19:50:06.797205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:11.667 [2024-11-26 19:50:06.800371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.667 [2024-11-26 19:50:06.800393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.667 [2024-11-26 19:50:06.800398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:11.667 [2024-11-26 19:50:06.803491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.667 [2024-11-26 19:50:06.803513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.667 [2024-11-26 19:50:06.803519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:11.667 [2024-11-26 19:50:06.806595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.667 [2024-11-26 19:50:06.806616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.667 [2024-11-26 19:50:06.806621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:11.667 [2024-11-26 19:50:06.809707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.667 [2024-11-26 19:50:06.809729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.667 [2024-11-26 19:50:06.809735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:11.667 [2024-11-26 19:50:06.812814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.667 [2024-11-26 19:50:06.812835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.667 [2024-11-26 19:50:06.812841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:11.667 [2024-11-26 19:50:06.815960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.667 [2024-11-26 19:50:06.815982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.667 [2024-11-26 19:50:06.815988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:16:11.667 [2024-11-26 19:50:06.819147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.667 [2024-11-26 19:50:06.819169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.667 [2024-11-26 19:50:06.819175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:11.667 [2024-11-26 19:50:06.822256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.668 [2024-11-26 19:50:06.822278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.668 [2024-11-26 19:50:06.822284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:11.668 [2024-11-26 19:50:06.825395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.668 [2024-11-26 19:50:06.825416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.668 [2024-11-26 19:50:06.825422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:11.668 [2024-11-26 19:50:06.828527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.668 [2024-11-26 19:50:06.828549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.668 [2024-11-26 19:50:06.828554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:11.668 [2024-11-26 19:50:06.831693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.668 [2024-11-26 19:50:06.831715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.668 [2024-11-26 19:50:06.831721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:11.668 [2024-11-26 19:50:06.834848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.668 [2024-11-26 19:50:06.834869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.668 [2024-11-26 19:50:06.834874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:11.668 [2024-11-26 19:50:06.838045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.668 [2024-11-26 19:50:06.838067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.668 [2024-11-26 19:50:06.838072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:11.668 [2024-11-26 19:50:06.841222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.668 [2024-11-26 19:50:06.841243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.668 [2024-11-26 19:50:06.841248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:11.668 [2024-11-26 19:50:06.844355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.668 [2024-11-26 19:50:06.844376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.668 [2024-11-26 19:50:06.844382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:11.668 [2024-11-26 19:50:06.847533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.668 [2024-11-26 19:50:06.847554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.668 [2024-11-26 19:50:06.847560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:11.668 [2024-11-26 19:50:06.850694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.668 [2024-11-26 19:50:06.850716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.668 [2024-11-26 19:50:06.850722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:11.668 [2024-11-26 19:50:06.853844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.668 [2024-11-26 19:50:06.853864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.668 [2024-11-26 19:50:06.853870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:11.668 [2024-11-26 19:50:06.856950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.668 [2024-11-26 19:50:06.856972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.668 [2024-11-26 19:50:06.856978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:11.668 [2024-11-26 19:50:06.860106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.668 [2024-11-26 19:50:06.860129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.668 [2024-11-26 19:50:06.860134] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:11.668 [2024-11-26 19:50:06.863238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.668 [2024-11-26 19:50:06.863259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.668 [2024-11-26 19:50:06.863264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:11.668 [2024-11-26 19:50:06.866325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.668 [2024-11-26 19:50:06.866346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.668 [2024-11-26 19:50:06.866352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:11.668 [2024-11-26 19:50:06.869463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.668 [2024-11-26 19:50:06.869485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.668 [2024-11-26 19:50:06.869490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:11.668 [2024-11-26 19:50:06.872641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.668 [2024-11-26 19:50:06.872662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.668 [2024-11-26 19:50:06.872668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:11.668 [2024-11-26 19:50:06.875762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.668 [2024-11-26 19:50:06.875792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.668 [2024-11-26 19:50:06.875798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:11.668 [2024-11-26 19:50:06.878895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.668 [2024-11-26 19:50:06.878915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.668 [2024-11-26 19:50:06.878921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:11.668 [2024-11-26 19:50:06.881987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.668 [2024-11-26 19:50:06.882007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.668 
[2024-11-26 19:50:06.882013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:11.668 [2024-11-26 19:50:06.885117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.668 [2024-11-26 19:50:06.885138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.668 [2024-11-26 19:50:06.885143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:11.668 [2024-11-26 19:50:06.888226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.668 [2024-11-26 19:50:06.888246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.668 [2024-11-26 19:50:06.888252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:11.668 [2024-11-26 19:50:06.891316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.668 [2024-11-26 19:50:06.891337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.668 [2024-11-26 19:50:06.891343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:11.668 [2024-11-26 19:50:06.894484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.668 [2024-11-26 19:50:06.894505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.668 [2024-11-26 19:50:06.894510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:11.668 [2024-11-26 19:50:06.897643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.668 [2024-11-26 19:50:06.897664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.668 [2024-11-26 19:50:06.897670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:11.668 [2024-11-26 19:50:06.900788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.668 [2024-11-26 19:50:06.900808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.668 [2024-11-26 19:50:06.900814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:11.668 [2024-11-26 19:50:06.903941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.668 [2024-11-26 19:50:06.903961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16544 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.668 [2024-11-26 19:50:06.903967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:11.668 [2024-11-26 19:50:06.907021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.669 [2024-11-26 19:50:06.907042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.669 [2024-11-26 19:50:06.907048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:11.929 [2024-11-26 19:50:06.910169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.929 [2024-11-26 19:50:06.910190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.929 [2024-11-26 19:50:06.910196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:11.929 [2024-11-26 19:50:06.913354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.929 [2024-11-26 19:50:06.913376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.929 [2024-11-26 19:50:06.913382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:11.929 [2024-11-26 19:50:06.916520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.929 [2024-11-26 19:50:06.916541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.929 [2024-11-26 19:50:06.916547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:11.929 [2024-11-26 19:50:06.919701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.929 [2024-11-26 19:50:06.919724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.929 [2024-11-26 19:50:06.919729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:11.929 [2024-11-26 19:50:06.922854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.929 [2024-11-26 19:50:06.922874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.929 [2024-11-26 19:50:06.922879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:11.929 [2024-11-26 19:50:06.925948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.929 [2024-11-26 19:50:06.925969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:9 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.929 [2024-11-26 19:50:06.925975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:11.929 [2024-11-26 19:50:06.929061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.929 [2024-11-26 19:50:06.929083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.929 [2024-11-26 19:50:06.929088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:11.929 [2024-11-26 19:50:06.932191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.929 [2024-11-26 19:50:06.932213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.929 [2024-11-26 19:50:06.932219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:11.929 [2024-11-26 19:50:06.935341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.929 [2024-11-26 19:50:06.935363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.929 [2024-11-26 19:50:06.935369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:11.929 [2024-11-26 19:50:06.938530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.929 [2024-11-26 19:50:06.938550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.929 [2024-11-26 19:50:06.938556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:11.929 [2024-11-26 19:50:06.941675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.929 [2024-11-26 19:50:06.941696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.929 [2024-11-26 19:50:06.941702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:11.929 [2024-11-26 19:50:06.944814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.929 [2024-11-26 19:50:06.944835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.929 [2024-11-26 19:50:06.944840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:11.929 [2024-11-26 19:50:06.947986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.930 [2024-11-26 19:50:06.948007] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.930 [2024-11-26 19:50:06.948013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:11.930 [2024-11-26 19:50:06.951181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.930 [2024-11-26 19:50:06.951202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.930 [2024-11-26 19:50:06.951208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:11.930 [2024-11-26 19:50:06.954323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.930 [2024-11-26 19:50:06.954345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.930 [2024-11-26 19:50:06.954351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:11.930 [2024-11-26 19:50:06.957487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.930 [2024-11-26 19:50:06.957509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.930 [2024-11-26 19:50:06.957515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:11.930 [2024-11-26 19:50:06.960643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.930 [2024-11-26 19:50:06.960665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.930 [2024-11-26 19:50:06.960671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:11.930 [2024-11-26 19:50:06.963821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.930 [2024-11-26 19:50:06.963841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.930 [2024-11-26 19:50:06.963847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:11.930 [2024-11-26 19:50:06.966981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.930 [2024-11-26 19:50:06.967002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.930 [2024-11-26 19:50:06.967008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:11.930 [2024-11-26 19:50:06.970157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 
00:16:11.930 [2024-11-26 19:50:06.970179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.930 [2024-11-26 19:50:06.970185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:11.930 [2024-11-26 19:50:06.973260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.930 [2024-11-26 19:50:06.973282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.930 [2024-11-26 19:50:06.973288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:11.930 [2024-11-26 19:50:06.976447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.930 [2024-11-26 19:50:06.976469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.930 [2024-11-26 19:50:06.976475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:11.930 [2024-11-26 19:50:06.979553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.930 [2024-11-26 19:50:06.979576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.930 [2024-11-26 19:50:06.979581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:11.930 [2024-11-26 19:50:06.982729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.930 [2024-11-26 19:50:06.982750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.930 [2024-11-26 19:50:06.982756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:11.930 [2024-11-26 19:50:06.985859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.930 [2024-11-26 19:50:06.985880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.930 [2024-11-26 19:50:06.985886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:11.930 [2024-11-26 19:50:06.989098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.930 [2024-11-26 19:50:06.989119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.930 [2024-11-26 19:50:06.989125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:11.930 [2024-11-26 19:50:06.992250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.930 [2024-11-26 19:50:06.992272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.930 [2024-11-26 19:50:06.992277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:11.930 [2024-11-26 19:50:06.995388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.930 [2024-11-26 19:50:06.995410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.930 [2024-11-26 19:50:06.995416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:11.930 [2024-11-26 19:50:06.998557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.930 [2024-11-26 19:50:06.998578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.930 [2024-11-26 19:50:06.998583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:11.930 [2024-11-26 19:50:07.001730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.930 [2024-11-26 19:50:07.001752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.930 [2024-11-26 19:50:07.001758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:11.930 [2024-11-26 19:50:07.004876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.930 [2024-11-26 19:50:07.004896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.930 [2024-11-26 19:50:07.004902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:11.930 [2024-11-26 19:50:07.008003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.930 [2024-11-26 19:50:07.008024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.930 [2024-11-26 19:50:07.008030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:11.930 [2024-11-26 19:50:07.011168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.930 [2024-11-26 19:50:07.011189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.930 [2024-11-26 19:50:07.011194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:11.930 [2024-11-26 19:50:07.014215] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.930 [2024-11-26 19:50:07.014237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.930 [2024-11-26 19:50:07.014242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:11.930 [2024-11-26 19:50:07.017337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.930 [2024-11-26 19:50:07.017358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.930 [2024-11-26 19:50:07.017364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:11.930 [2024-11-26 19:50:07.020453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.930 [2024-11-26 19:50:07.020474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.930 [2024-11-26 19:50:07.020480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:11.931 [2024-11-26 19:50:07.023591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.931 [2024-11-26 19:50:07.023613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.931 [2024-11-26 19:50:07.023618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:11.931 [2024-11-26 19:50:07.026733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.931 [2024-11-26 19:50:07.026754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.931 [2024-11-26 19:50:07.026760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:11.931 [2024-11-26 19:50:07.029887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.931 [2024-11-26 19:50:07.029908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.931 [2024-11-26 19:50:07.029914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:11.931 [2024-11-26 19:50:07.033013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.931 [2024-11-26 19:50:07.033034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.931 [2024-11-26 19:50:07.033039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:16:11.931 [2024-11-26 19:50:07.036119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.931 [2024-11-26 19:50:07.036140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.931 [2024-11-26 19:50:07.036146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:11.931 [2024-11-26 19:50:07.039269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.931 [2024-11-26 19:50:07.039289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.931 [2024-11-26 19:50:07.039295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:11.931 [2024-11-26 19:50:07.042352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.931 [2024-11-26 19:50:07.042374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.931 [2024-11-26 19:50:07.042379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:11.931 [2024-11-26 19:50:07.045472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.931 [2024-11-26 19:50:07.045494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.931 [2024-11-26 19:50:07.045499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:11.931 [2024-11-26 19:50:07.048637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.931 [2024-11-26 19:50:07.048659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.931 [2024-11-26 19:50:07.048665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:11.931 [2024-11-26 19:50:07.051834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.931 [2024-11-26 19:50:07.051855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.931 [2024-11-26 19:50:07.051861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:11.931 [2024-11-26 19:50:07.054999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.931 [2024-11-26 19:50:07.055020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.931 [2024-11-26 19:50:07.055026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:11.931 [2024-11-26 19:50:07.058096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.931 [2024-11-26 19:50:07.058117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.931 [2024-11-26 19:50:07.058123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:11.931 [2024-11-26 19:50:07.061260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.931 [2024-11-26 19:50:07.061282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.931 [2024-11-26 19:50:07.061287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:11.931 [2024-11-26 19:50:07.064361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.931 [2024-11-26 19:50:07.064383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.931 [2024-11-26 19:50:07.064388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:11.931 [2024-11-26 19:50:07.067571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.931 [2024-11-26 19:50:07.067592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.931 [2024-11-26 19:50:07.067598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:11.931 [2024-11-26 19:50:07.070685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.931 [2024-11-26 19:50:07.070706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.931 [2024-11-26 19:50:07.070712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:11.931 [2024-11-26 19:50:07.073806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.931 [2024-11-26 19:50:07.073826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.931 [2024-11-26 19:50:07.073833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:11.931 [2024-11-26 19:50:07.076921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.931 [2024-11-26 19:50:07.076942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.931 [2024-11-26 19:50:07.076948] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:11.931 [2024-11-26 19:50:07.080061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.931 [2024-11-26 19:50:07.080083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.931 [2024-11-26 19:50:07.080088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:11.931 [2024-11-26 19:50:07.083145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.931 [2024-11-26 19:50:07.083166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.931 [2024-11-26 19:50:07.083172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:11.931 [2024-11-26 19:50:07.086223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.931 [2024-11-26 19:50:07.086244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.931 [2024-11-26 19:50:07.086250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:11.931 [2024-11-26 19:50:07.089330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.931 [2024-11-26 19:50:07.089351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.931 [2024-11-26 19:50:07.089357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:11.931 [2024-11-26 19:50:07.092423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.931 [2024-11-26 19:50:07.092444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.931 [2024-11-26 19:50:07.092449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:11.931 [2024-11-26 19:50:07.095580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.931 [2024-11-26 19:50:07.095601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.931 [2024-11-26 19:50:07.095607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:11.931 [2024-11-26 19:50:07.098666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.931 [2024-11-26 19:50:07.098689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.931 [2024-11-26 
19:50:07.098694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:11.931 [2024-11-26 19:50:07.101819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.931 [2024-11-26 19:50:07.101839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.931 [2024-11-26 19:50:07.101845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:11.932 [2024-11-26 19:50:07.104938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.932 [2024-11-26 19:50:07.104959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.932 [2024-11-26 19:50:07.104965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:11.932 [2024-11-26 19:50:07.108114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.932 [2024-11-26 19:50:07.108136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.932 [2024-11-26 19:50:07.108141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:11.932 [2024-11-26 19:50:07.111263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.932 [2024-11-26 19:50:07.111285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.932 [2024-11-26 19:50:07.111291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:11.932 [2024-11-26 19:50:07.114348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.932 [2024-11-26 19:50:07.114369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.932 [2024-11-26 19:50:07.114374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:11.932 [2024-11-26 19:50:07.117488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.932 [2024-11-26 19:50:07.117509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.932 [2024-11-26 19:50:07.117515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:11.932 [2024-11-26 19:50:07.120610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.932 [2024-11-26 19:50:07.120631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:16:11.932 [2024-11-26 19:50:07.120637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:11.932 [2024-11-26 19:50:07.123745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.932 [2024-11-26 19:50:07.123777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.932 [2024-11-26 19:50:07.123783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:11.932 [2024-11-26 19:50:07.126881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.932 [2024-11-26 19:50:07.126901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.932 [2024-11-26 19:50:07.126907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:11.932 [2024-11-26 19:50:07.130045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.932 [2024-11-26 19:50:07.130067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.932 [2024-11-26 19:50:07.130073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:11.932 [2024-11-26 19:50:07.133199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.932 [2024-11-26 19:50:07.133221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.932 [2024-11-26 19:50:07.133226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:11.932 [2024-11-26 19:50:07.136286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.932 [2024-11-26 19:50:07.136308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.932 [2024-11-26 19:50:07.136314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:11.932 [2024-11-26 19:50:07.139462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.932 [2024-11-26 19:50:07.139484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.932 [2024-11-26 19:50:07.139490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:11.932 [2024-11-26 19:50:07.142649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.932 [2024-11-26 19:50:07.142671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:14 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.932 [2024-11-26 19:50:07.142676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:11.932 [2024-11-26 19:50:07.145754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.932 [2024-11-26 19:50:07.145787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.932 [2024-11-26 19:50:07.145793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:11.932 [2024-11-26 19:50:07.148848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.932 [2024-11-26 19:50:07.148869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.932 [2024-11-26 19:50:07.148874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:11.932 [2024-11-26 19:50:07.151968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.932 [2024-11-26 19:50:07.151988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.932 [2024-11-26 19:50:07.151994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:11.932 [2024-11-26 19:50:07.155116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.932 [2024-11-26 19:50:07.155137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.932 [2024-11-26 19:50:07.155143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:11.932 [2024-11-26 19:50:07.158272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.932 [2024-11-26 19:50:07.158294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.932 [2024-11-26 19:50:07.158300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:11.932 [2024-11-26 19:50:07.161416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.932 [2024-11-26 19:50:07.161436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.932 [2024-11-26 19:50:07.161442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:11.932 [2024-11-26 19:50:07.164543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.932 [2024-11-26 19:50:07.164565] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.932 [2024-11-26 19:50:07.164571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:11.932 [2024-11-26 19:50:07.167704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.932 [2024-11-26 19:50:07.167726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.932 [2024-11-26 19:50:07.167732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:11.932 [2024-11-26 19:50:07.170821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:11.932 [2024-11-26 19:50:07.170840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.932 [2024-11-26 19:50:07.170846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:12.193 [2024-11-26 19:50:07.174034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:12.193 [2024-11-26 19:50:07.174057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.193 [2024-11-26 19:50:07.174062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:12.193 [2024-11-26 19:50:07.177183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:12.193 [2024-11-26 19:50:07.177205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.193 [2024-11-26 19:50:07.177211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:12.193 [2024-11-26 19:50:07.180312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:12.193 [2024-11-26 19:50:07.180333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.193 [2024-11-26 19:50:07.180339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:12.193 [2024-11-26 19:50:07.183506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:12.193 [2024-11-26 19:50:07.183527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.193 [2024-11-26 19:50:07.183533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:12.193 [2024-11-26 19:50:07.186598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 
00:16:12.193 [2024-11-26 19:50:07.186620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.193 [2024-11-26 19:50:07.186625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:12.193 [2024-11-26 19:50:07.189703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:12.193 [2024-11-26 19:50:07.189725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.193 [2024-11-26 19:50:07.189731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:12.193 [2024-11-26 19:50:07.192880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:12.193 [2024-11-26 19:50:07.192901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.193 [2024-11-26 19:50:07.192907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:12.193 [2024-11-26 19:50:07.196023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:12.193 [2024-11-26 19:50:07.196045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.193 [2024-11-26 19:50:07.196051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:12.193 [2024-11-26 19:50:07.199117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:12.193 [2024-11-26 19:50:07.199138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.193 [2024-11-26 19:50:07.199144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:12.193 [2024-11-26 19:50:07.202224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:12.193 [2024-11-26 19:50:07.202245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.193 [2024-11-26 19:50:07.202250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:12.193 [2024-11-26 19:50:07.205307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:12.193 [2024-11-26 19:50:07.205329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.193 [2024-11-26 19:50:07.205335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:12.193 [2024-11-26 19:50:07.208456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x189ca80) 00:16:12.193 [2024-11-26 19:50:07.208478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.193 [2024-11-26 19:50:07.208485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:12.193 [2024-11-26 19:50:07.211623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:12.193 [2024-11-26 19:50:07.211645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.193 [2024-11-26 19:50:07.211650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:12.193 [2024-11-26 19:50:07.214751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:12.193 [2024-11-26 19:50:07.214781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.193 [2024-11-26 19:50:07.214787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:12.193 [2024-11-26 19:50:07.217874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:12.193 [2024-11-26 19:50:07.217894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.193 [2024-11-26 19:50:07.217900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:12.193 [2024-11-26 19:50:07.221021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:12.193 [2024-11-26 19:50:07.221042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.193 [2024-11-26 19:50:07.221048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:12.193 [2024-11-26 19:50:07.224186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:12.193 [2024-11-26 19:50:07.224207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.193 [2024-11-26 19:50:07.224213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:12.193 [2024-11-26 19:50:07.227311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:12.193 [2024-11-26 19:50:07.227333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.193 [2024-11-26 19:50:07.227339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:12.194 [2024-11-26 19:50:07.230469] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:12.194 [2024-11-26 19:50:07.230490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.194 [2024-11-26 19:50:07.230496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:12.194 [2024-11-26 19:50:07.233570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:12.194 [2024-11-26 19:50:07.233593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.194 [2024-11-26 19:50:07.233598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:12.194 [2024-11-26 19:50:07.236744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:12.194 [2024-11-26 19:50:07.236778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.194 [2024-11-26 19:50:07.236784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:12.194 [2024-11-26 19:50:07.239939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:12.194 [2024-11-26 19:50:07.239960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.194 [2024-11-26 19:50:07.239965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:12.194 [2024-11-26 19:50:07.243080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:12.194 [2024-11-26 19:50:07.243103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.194 [2024-11-26 19:50:07.243109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:12.194 [2024-11-26 19:50:07.246221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:12.194 [2024-11-26 19:50:07.246243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.194 [2024-11-26 19:50:07.246249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:12.194 [2024-11-26 19:50:07.249418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:12.194 [2024-11-26 19:50:07.249441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.194 [2024-11-26 19:50:07.249446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:16:12.194 [2024-11-26 19:50:07.252589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:12.194 [2024-11-26 19:50:07.252610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.194 [2024-11-26 19:50:07.252616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:12.194 [2024-11-26 19:50:07.255716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:12.194 [2024-11-26 19:50:07.255738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.194 [2024-11-26 19:50:07.255744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:12.194 [2024-11-26 19:50:07.258905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:12.194 [2024-11-26 19:50:07.258926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.194 [2024-11-26 19:50:07.258932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:12.194 [2024-11-26 19:50:07.262076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:12.194 [2024-11-26 19:50:07.262097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.194 [2024-11-26 19:50:07.262103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:12.194 [2024-11-26 19:50:07.265167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:12.194 [2024-11-26 19:50:07.265188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.194 [2024-11-26 19:50:07.265194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:12.194 [2024-11-26 19:50:07.268273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:12.194 [2024-11-26 19:50:07.268295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.194 [2024-11-26 19:50:07.268300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:12.194 [2024-11-26 19:50:07.271418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:12.194 [2024-11-26 19:50:07.271440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.194 [2024-11-26 19:50:07.271446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:12.194 [2024-11-26 19:50:07.274550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:12.194 [2024-11-26 19:50:07.274572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.194 [2024-11-26 19:50:07.274577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:12.194 [2024-11-26 19:50:07.277695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:12.194 [2024-11-26 19:50:07.277717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.194 [2024-11-26 19:50:07.277722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:12.194 [2024-11-26 19:50:07.280875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:12.194 [2024-11-26 19:50:07.280896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.194 [2024-11-26 19:50:07.280902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:12.194 [2024-11-26 19:50:07.284003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:12.194 [2024-11-26 19:50:07.284025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.194 [2024-11-26 19:50:07.284030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:12.194 [2024-11-26 19:50:07.287138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:12.194 [2024-11-26 19:50:07.287159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.194 [2024-11-26 19:50:07.287164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:12.194 [2024-11-26 19:50:07.290225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:12.194 [2024-11-26 19:50:07.290246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.194 [2024-11-26 19:50:07.290252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:12.194 [2024-11-26 19:50:07.293308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:12.194 [2024-11-26 19:50:07.293330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.194 [2024-11-26 19:50:07.293336] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:12.194 [2024-11-26 19:50:07.296408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:12.194 [2024-11-26 19:50:07.296429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.194 [2024-11-26 19:50:07.296435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:12.194 [2024-11-26 19:50:07.299544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:12.194 [2024-11-26 19:50:07.299566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.194 [2024-11-26 19:50:07.299571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:12.194 [2024-11-26 19:50:07.302681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:12.194 [2024-11-26 19:50:07.302703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.194 [2024-11-26 19:50:07.302709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:12.194 [2024-11-26 19:50:07.305830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:12.194 [2024-11-26 19:50:07.305850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.194 [2024-11-26 19:50:07.305856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:12.194 [2024-11-26 19:50:07.308950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:12.194 [2024-11-26 19:50:07.308971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.194 [2024-11-26 19:50:07.308977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:12.194 [2024-11-26 19:50:07.312056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:12.194 [2024-11-26 19:50:07.312077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.194 [2024-11-26 19:50:07.312083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:12.195 [2024-11-26 19:50:07.315226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:12.195 [2024-11-26 19:50:07.315247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.195 
[2024-11-26 19:50:07.315253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:12.195 [2024-11-26 19:50:07.318312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:12.195 [2024-11-26 19:50:07.318334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.195 [2024-11-26 19:50:07.318339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:12.195 [2024-11-26 19:50:07.321457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:12.195 [2024-11-26 19:50:07.321479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.195 [2024-11-26 19:50:07.321484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:12.195 [2024-11-26 19:50:07.324607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:12.195 [2024-11-26 19:50:07.324629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.195 [2024-11-26 19:50:07.324635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:12.195 [2024-11-26 19:50:07.327704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:12.195 [2024-11-26 19:50:07.327726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.195 [2024-11-26 19:50:07.327731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:12.195 [2024-11-26 19:50:07.330889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:12.195 [2024-11-26 19:50:07.330909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.195 [2024-11-26 19:50:07.330915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:12.195 [2024-11-26 19:50:07.334018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:12.195 [2024-11-26 19:50:07.334039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.195 [2024-11-26 19:50:07.334045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:12.195 [2024-11-26 19:50:07.337131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:12.195 [2024-11-26 19:50:07.337152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22528 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.195 [2024-11-26 19:50:07.337158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:12.195 [2024-11-26 19:50:07.340298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:12.195 [2024-11-26 19:50:07.340319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.195 [2024-11-26 19:50:07.340325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:12.195 [2024-11-26 19:50:07.343463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:12.195 [2024-11-26 19:50:07.343484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.195 [2024-11-26 19:50:07.343490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:12.195 [2024-11-26 19:50:07.346530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:12.195 [2024-11-26 19:50:07.346551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.195 [2024-11-26 19:50:07.346556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:12.195 [2024-11-26 19:50:07.349697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:12.195 [2024-11-26 19:50:07.349718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.195 [2024-11-26 19:50:07.349724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:12.195 [2024-11-26 19:50:07.352848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:12.195 [2024-11-26 19:50:07.352868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.195 [2024-11-26 19:50:07.352874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:12.195 [2024-11-26 19:50:07.356003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:12.195 [2024-11-26 19:50:07.356025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.195 [2024-11-26 19:50:07.356031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:12.195 [2024-11-26 19:50:07.359161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80) 00:16:12.195 [2024-11-26 19:50:07.359182] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:12.195 [2024-11-26 19:50:07.359188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:16:12.195 [2024-11-26 19:50:07.362270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80)
00:16:12.195 [2024-11-26 19:50:07.362292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:12.195 [2024-11-26 19:50:07.362298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:16:12.195 [2024-11-26 19:50:07.365418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80)
00:16:12.195 [2024-11-26 19:50:07.365440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:12.195 [2024-11-26 19:50:07.365445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:16:12.195 [2024-11-26 19:50:07.369844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x189ca80)
00:16:12.195 [2024-11-26 19:50:07.369865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:12.195 [2024-11-26 19:50:07.369872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:16:12.195 9765.00 IOPS, 1220.62 MiB/s
00:16:12.195 Latency(us)
00:16:12.195 [2024-11-26T19:50:07.442Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:12.195 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:16:12.195 nvme0n1 : 2.00 9764.66 1220.58 0.00 0.00 1635.97 1487.16 5873.03
00:16:12.195 [2024-11-26T19:50:07.442Z] ===================================================================================================================
00:16:12.195 [2024-11-26T19:50:07.442Z] Total : 9764.66 1220.58 0.00 0.00 1635.97 1487.16 5873.03
00:16:12.195 {
00:16:12.195   "results": [
00:16:12.195     {
00:16:12.195       "job": "nvme0n1",
00:16:12.195       "core_mask": "0x2",
00:16:12.195       "workload": "randread",
00:16:12.195       "status": "finished",
00:16:12.195       "queue_depth": 16,
00:16:12.195       "io_size": 131072,
00:16:12.195       "runtime": 2.001708,
00:16:12.195       "iops": 9764.660979523487,
00:16:12.195       "mibps": 1220.582622440436,
00:16:12.195       "io_failed": 0,
00:16:12.195       "io_timeout": 0,
00:16:12.195       "avg_latency_us": 1635.9705902447085,
00:16:12.195       "min_latency_us": 1487.163076923077,
00:16:12.195       "max_latency_us": 5873.033846153846
00:16:12.195     }
00:16:12.195   ],
00:16:12.195   "core_count": 1
00:16:12.195 }
00:16:12.195 19:50:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:16:12.195 19:50:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:16:12.195 19:50:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:16:12.195 | .driver_specific
00:16:12.195 | .nvme_error
00:16:12.195 | .status_code
00:16:12.195 |
.command_transient_transport_error' 00:16:12.195 19:50:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:16:12.453 19:50:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 631 > 0 )) 00:16:12.453 19:50:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 78877 00:16:12.453 19:50:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 78877 ']' 00:16:12.453 19:50:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 78877 00:16:12.453 19:50:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:16:12.453 19:50:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:12.453 19:50:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78877 00:16:12.453 19:50:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:12.453 19:50:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:12.453 killing process with pid 78877 00:16:12.453 19:50:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78877' 00:16:12.454 Received shutdown signal, test time was about 2.000000 seconds 00:16:12.454 00:16:12.454 Latency(us) 00:16:12.454 [2024-11-26T19:50:07.701Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:12.454 [2024-11-26T19:50:07.701Z] =================================================================================================================== 00:16:12.454 [2024-11-26T19:50:07.701Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:12.454 19:50:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 78877 00:16:12.454 19:50:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 78877 00:16:12.712 19:50:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:16:12.712 19:50:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:16:12.712 19:50:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:16:12.712 19:50:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:16:12.712 19:50:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:16:12.712 19:50:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=78937 00:16:12.712 19:50:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:16:12.712 19:50:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 78937 /var/tmp/bperf.sock 00:16:12.712 19:50:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 78937 ']' 00:16:12.712 19:50:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 
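The (( 631 > 0 )) check traced above is the pass criterion for the randread leg: after the corrupted-CRC run, get_transient_errcount must report a non-zero number of COMMAND TRANSIENT TRANSPORT ERROR completions on nvme0n1. Below is a minimal shell sketch of what that helper does, reconstructed only from the rpc.py invocation and jq filter visible in this trace; the rpc.py path, socket path, and bdev name are simply the ones used in this run, and the sketch is an illustration, not the digest.sh source.

#!/usr/bin/env bash
# Sketch: ask the bdevperf process (started with -r /var/tmp/bperf.sock) for
# per-bdev I/O statistics and pull out the transient transport error counter
# that bdev_get_iostat exposes when bdev_nvme runs with --nvme-error-stat,
# as configured earlier in this log.
set -euo pipefail

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path used in this run
sock=/var/tmp/bperf.sock                          # bdevperf RPC socket

get_transient_errcount() {
    local bdev=$1
    "$rpc" -s "$sock" bdev_get_iostat -b "$bdev" \
        | jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error'
}

errcount=$(get_transient_errcount nvme0n1)
# The digest-error test only passes when the injected CRC corruption actually
# produced transient transport errors on the target bdev.
(( errcount > 0 ))

The randwrite leg being launched below follows the same pattern, with accel_error_inject_error -o crc32c -t corrupt -i 256 (traced further down) forcing data-digest mismatches on the writes.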
00:16:12.712 19:50:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:12.712 19:50:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:12.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:12.712 19:50:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:12.712 19:50:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:12.712 [2024-11-26 19:50:07.793895] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:16:12.712 [2024-11-26 19:50:07.793952] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78937 ] 00:16:12.712 [2024-11-26 19:50:07.927076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:12.970 [2024-11-26 19:50:07.964319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:12.970 [2024-11-26 19:50:08.002986] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:13.589 19:50:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:13.589 19:50:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:16:13.589 19:50:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:13.589 19:50:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:13.846 19:50:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:16:13.846 19:50:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.846 19:50:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:13.846 19:50:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.846 19:50:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:13.846 19:50:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:14.102 nvme0n1 00:16:14.102 19:50:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:16:14.102 19:50:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.102 19:50:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:14.102 19:50:09 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.102 19:50:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:16:14.102 19:50:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:14.102 Running I/O for 2 seconds... 00:16:14.102 [2024-11-26 19:50:09.253309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016efb048 00:16:14.102 [2024-11-26 19:50:09.254425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:22551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.102 [2024-11-26 19:50:09.254462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:14.102 [2024-11-26 19:50:09.265658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016efb8b8 00:16:14.102 [2024-11-26 19:50:09.266748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.102 [2024-11-26 19:50:09.266782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.102 [2024-11-26 19:50:09.277804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016efc128 00:16:14.102 [2024-11-26 19:50:09.278839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:3811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.102 [2024-11-26 19:50:09.278861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:14.102 [2024-11-26 19:50:09.289797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016efc998 00:16:14.102 [2024-11-26 19:50:09.290818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.102 [2024-11-26 19:50:09.290838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:14.102 [2024-11-26 19:50:09.301841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016efd208 00:16:14.102 [2024-11-26 19:50:09.302873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.102 [2024-11-26 19:50:09.302894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:14.102 [2024-11-26 19:50:09.313803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016efda78 00:16:14.102 [2024-11-26 19:50:09.314808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.102 [2024-11-26 19:50:09.314828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:14.102 [2024-11-26 
19:50:09.325863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016efe2e8 00:16:14.102 [2024-11-26 19:50:09.326833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.102 [2024-11-26 19:50:09.326853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:14.102 [2024-11-26 19:50:09.337795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016efeb58 00:16:14.102 [2024-11-26 19:50:09.338747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.102 [2024-11-26 19:50:09.338773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:14.360 [2024-11-26 19:50:09.354873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016efef90 00:16:14.361 [2024-11-26 19:50:09.356786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.361 [2024-11-26 19:50:09.356805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:14.361 [2024-11-26 19:50:09.366839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016efeb58 00:16:14.361 [2024-11-26 19:50:09.368692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.361 [2024-11-26 19:50:09.368712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:14.361 [2024-11-26 19:50:09.378877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016efe2e8 00:16:14.361 [2024-11-26 19:50:09.380745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.361 [2024-11-26 19:50:09.380764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:14.361 [2024-11-26 19:50:09.390857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016efda78 00:16:14.361 [2024-11-26 19:50:09.392671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.361 [2024-11-26 19:50:09.392688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:14.361 [2024-11-26 19:50:09.402883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016efd208 00:16:14.361 [2024-11-26 19:50:09.404743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:19427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.361 [2024-11-26 19:50:09.404763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:14.361 
[2024-11-26 19:50:09.414997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016efc998 00:16:14.361 [2024-11-26 19:50:09.416811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:13666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.361 [2024-11-26 19:50:09.416831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:14.361 [2024-11-26 19:50:09.426988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016efc128 00:16:14.361 [2024-11-26 19:50:09.428772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:23765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.361 [2024-11-26 19:50:09.428790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:14.361 [2024-11-26 19:50:09.439016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016efb8b8 00:16:14.361 [2024-11-26 19:50:09.440787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:1270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.361 [2024-11-26 19:50:09.440806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:14.361 [2024-11-26 19:50:09.450943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016efb048 00:16:14.361 [2024-11-26 19:50:09.452692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:22350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.361 [2024-11-26 19:50:09.452712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:14.361 [2024-11-26 19:50:09.462878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016efa7d8 00:16:14.361 [2024-11-26 19:50:09.464609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:4351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.361 [2024-11-26 19:50:09.464628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:14.361 [2024-11-26 19:50:09.474898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ef9f68 00:16:14.361 [2024-11-26 19:50:09.476758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:12971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.361 [2024-11-26 19:50:09.476795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:14.361 [2024-11-26 19:50:09.487163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ef96f8 00:16:14.361 [2024-11-26 19:50:09.488930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:22348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.361 [2024-11-26 19:50:09.488952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:005b p:0 m:0 dnr:0 
00:16:14.361 [2024-11-26 19:50:09.499428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ef8e88 00:16:14.361 [2024-11-26 19:50:09.501162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:1270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.361 [2024-11-26 19:50:09.501184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:14.361 [2024-11-26 19:50:09.511501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ef8618 00:16:14.361 [2024-11-26 19:50:09.513202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:6474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.361 [2024-11-26 19:50:09.513222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:14.361 [2024-11-26 19:50:09.523619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ef7da8 00:16:14.361 [2024-11-26 19:50:09.525337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:14989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.361 [2024-11-26 19:50:09.525359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:14.361 [2024-11-26 19:50:09.535855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ef7538 00:16:14.361 [2024-11-26 19:50:09.537494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:19765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.361 [2024-11-26 19:50:09.537517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:14.361 [2024-11-26 19:50:09.548080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ef6cc8 00:16:14.361 [2024-11-26 19:50:09.549757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:12500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.361 [2024-11-26 19:50:09.549790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:14.361 [2024-11-26 19:50:09.560105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ef6458 00:16:14.361 [2024-11-26 19:50:09.561727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:2629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.361 [2024-11-26 19:50:09.561750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:14.361 [2024-11-26 19:50:09.572178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ef5be8 00:16:14.361 [2024-11-26 19:50:09.573791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.361 [2024-11-26 19:50:09.573811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:004d 
p:0 m:0 dnr:0 00:16:14.361 [2024-11-26 19:50:09.584272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ef5378 00:16:14.361 [2024-11-26 19:50:09.585896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:16358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.361 [2024-11-26 19:50:09.585919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:14.361 [2024-11-26 19:50:09.596528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ef4b08 00:16:14.361 [2024-11-26 19:50:09.598163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:24396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.361 [2024-11-26 19:50:09.598186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:14.620 [2024-11-26 19:50:09.608833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ef4298 00:16:14.620 [2024-11-26 19:50:09.610432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:25568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.620 [2024-11-26 19:50:09.610455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:14.620 [2024-11-26 19:50:09.621122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ef3a28 00:16:14.620 [2024-11-26 19:50:09.622720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:12698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.620 [2024-11-26 19:50:09.622743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:14.620 [2024-11-26 19:50:09.633451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ef31b8 00:16:14.620 [2024-11-26 19:50:09.635036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:1932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.620 [2024-11-26 19:50:09.635063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:14.620 [2024-11-26 19:50:09.645705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ef2948 00:16:14.620 [2024-11-26 19:50:09.647300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:18227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.620 [2024-11-26 19:50:09.647324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:14.620 [2024-11-26 19:50:09.657940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ef20d8 00:16:14.620 [2024-11-26 19:50:09.659563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:1023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.620 [2024-11-26 19:50:09.659580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 
cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:14.620 [2024-11-26 19:50:09.670269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ef1868 00:16:14.620 [2024-11-26 19:50:09.671819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:8176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.620 [2024-11-26 19:50:09.672395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:14.620 [2024-11-26 19:50:09.683252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ef0ff8 00:16:14.620 [2024-11-26 19:50:09.684779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:20798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.620 [2024-11-26 19:50:09.684806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:14.620 [2024-11-26 19:50:09.695427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ef0788 00:16:14.620 [2024-11-26 19:50:09.696933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:5071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.620 [2024-11-26 19:50:09.696954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:14.620 [2024-11-26 19:50:09.707607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016eeff18 00:16:14.620 [2024-11-26 19:50:09.709101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:23664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.620 [2024-11-26 19:50:09.709121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:14.620 [2024-11-26 19:50:09.719733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016eef6a8 00:16:14.620 [2024-11-26 19:50:09.721218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:22184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.620 [2024-11-26 19:50:09.721238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:14.620 [2024-11-26 19:50:09.731871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016eeee38 00:16:14.620 [2024-11-26 19:50:09.733342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:21768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.620 [2024-11-26 19:50:09.733363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:14.620 [2024-11-26 19:50:09.744038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016eee5c8 00:16:14.620 [2024-11-26 19:50:09.745480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:10966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.620 [2024-11-26 19:50:09.745500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:73 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:14.620 [2024-11-26 19:50:09.756164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016eedd58 00:16:14.620 [2024-11-26 19:50:09.757596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:17655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.620 [2024-11-26 19:50:09.757617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:14.620 [2024-11-26 19:50:09.768318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016eed4e8 00:16:14.620 [2024-11-26 19:50:09.769736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:22255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.620 [2024-11-26 19:50:09.769756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:14.620 [2024-11-26 19:50:09.780461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016eecc78 00:16:14.620 [2024-11-26 19:50:09.781870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:5088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.620 [2024-11-26 19:50:09.781889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:14.620 [2024-11-26 19:50:09.792569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016eec408 00:16:14.620 [2024-11-26 19:50:09.793939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:5741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.620 [2024-11-26 19:50:09.793958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:14.620 [2024-11-26 19:50:09.804653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016eebb98 00:16:14.620 [2024-11-26 19:50:09.806025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:11589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.620 [2024-11-26 19:50:09.806046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:14.620 [2024-11-26 19:50:09.816583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016eeb328 00:16:14.620 [2024-11-26 19:50:09.817917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:14690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.620 [2024-11-26 19:50:09.817937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:14.620 [2024-11-26 19:50:09.828562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016eeaab8 00:16:14.620 [2024-11-26 19:50:09.829977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:16820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.620 [2024-11-26 19:50:09.829996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:14.620 [2024-11-26 19:50:09.840758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016eea248 00:16:14.620 [2024-11-26 19:50:09.842088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:9001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.620 [2024-11-26 19:50:09.842107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:14.620 [2024-11-26 19:50:09.852899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ee99d8 00:16:14.620 [2024-11-26 19:50:09.854208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:5883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.620 [2024-11-26 19:50:09.854228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:14.880 [2024-11-26 19:50:09.865071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ee9168 00:16:14.880 [2024-11-26 19:50:09.866362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:15376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.880 [2024-11-26 19:50:09.866381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:14.880 [2024-11-26 19:50:09.877212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ee88f8 00:16:14.880 [2024-11-26 19:50:09.878492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:14170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.880 [2024-11-26 19:50:09.878512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:14.880 [2024-11-26 19:50:09.889356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ee8088 00:16:14.880 [2024-11-26 19:50:09.890612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:23056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.880 [2024-11-26 19:50:09.890633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:14.880 [2024-11-26 19:50:09.901510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ee7818 00:16:14.880 [2024-11-26 19:50:09.902753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:9131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.880 [2024-11-26 19:50:09.902858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:14.880 [2024-11-26 19:50:09.913670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ee6fa8 00:16:14.880 [2024-11-26 19:50:09.914902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:15478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.880 [2024-11-26 19:50:09.914922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:14.880 [2024-11-26 19:50:09.925634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ee6738 00:16:14.880 [2024-11-26 19:50:09.926798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.880 [2024-11-26 19:50:09.926818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:14.880 [2024-11-26 19:50:09.937467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ee5ec8 00:16:14.880 [2024-11-26 19:50:09.938666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:12951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.880 [2024-11-26 19:50:09.938748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:14.880 [2024-11-26 19:50:09.949648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ee5658 00:16:14.880 [2024-11-26 19:50:09.950849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.880 [2024-11-26 19:50:09.950869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:14.880 [2024-11-26 19:50:09.961738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ee4de8 00:16:14.880 [2024-11-26 19:50:09.962918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:18816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.880 [2024-11-26 19:50:09.962939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:14.880 [2024-11-26 19:50:09.973815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ee4578 00:16:14.880 [2024-11-26 19:50:09.974937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.880 [2024-11-26 19:50:09.974957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:14.880 [2024-11-26 19:50:09.985776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ee3d08 00:16:14.880 [2024-11-26 19:50:09.986891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:9382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.881 [2024-11-26 19:50:09.986911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:14.881 [2024-11-26 19:50:09.997771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ee3498 00:16:14.881 [2024-11-26 19:50:09.998897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.881 [2024-11-26 19:50:09.998917] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:14.881 [2024-11-26 19:50:10.009995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ee2c28 00:16:14.881 [2024-11-26 19:50:10.011163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:2446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.881 [2024-11-26 19:50:10.011185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:14.881 [2024-11-26 19:50:10.022341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ee23b8 00:16:14.881 [2024-11-26 19:50:10.023448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:11213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.881 [2024-11-26 19:50:10.023468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:14.881 [2024-11-26 19:50:10.034620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ee1b48 00:16:14.881 [2024-11-26 19:50:10.035719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:4750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.881 [2024-11-26 19:50:10.035739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:14.881 [2024-11-26 19:50:10.047053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ee12d8 00:16:14.881 [2024-11-26 19:50:10.048210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:19541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.881 [2024-11-26 19:50:10.048229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:14.881 [2024-11-26 19:50:10.059267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ee0a68 00:16:14.881 [2024-11-26 19:50:10.060313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.881 [2024-11-26 19:50:10.060333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:14.881 [2024-11-26 19:50:10.071332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ee01f8 00:16:14.881 [2024-11-26 19:50:10.072341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.881 [2024-11-26 19:50:10.072360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:14.881 [2024-11-26 19:50:10.083361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016edf988 00:16:14.881 [2024-11-26 19:50:10.084386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.881 [2024-11-26 
19:50:10.084407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:14.881 [2024-11-26 19:50:10.095398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016edf118 00:16:14.881 [2024-11-26 19:50:10.096411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.881 [2024-11-26 19:50:10.096431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:14.881 [2024-11-26 19:50:10.107563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ede8a8 00:16:14.881 [2024-11-26 19:50:10.108567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.881 [2024-11-26 19:50:10.108588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:14.881 [2024-11-26 19:50:10.119700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ede038 00:16:14.881 [2024-11-26 19:50:10.120689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:14.881 [2024-11-26 19:50:10.120710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:15.140 [2024-11-26 19:50:10.137042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ede038 00:16:15.140 [2024-11-26 19:50:10.138958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.140 [2024-11-26 19:50:10.138978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:15.140 [2024-11-26 19:50:10.149413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ede8a8 00:16:15.140 [2024-11-26 19:50:10.151351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:15941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.140 [2024-11-26 19:50:10.151372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:15.140 [2024-11-26 19:50:10.161815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016edf118 00:16:15.140 [2024-11-26 19:50:10.163777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:14591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.140 [2024-11-26 19:50:10.163797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:15.140 [2024-11-26 19:50:10.174020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016edf988 00:16:15.140 [2024-11-26 19:50:10.175898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:21314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.140 
[2024-11-26 19:50:10.175918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:15.140 [2024-11-26 19:50:10.186253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ee01f8 00:16:15.140 [2024-11-26 19:50:10.188119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:20822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.140 [2024-11-26 19:50:10.188138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:15.140 [2024-11-26 19:50:10.198411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ee0a68 00:16:15.140 [2024-11-26 19:50:10.200258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:9791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.140 [2024-11-26 19:50:10.200277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:15.140 [2024-11-26 19:50:10.210549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ee12d8 00:16:15.140 [2024-11-26 19:50:10.212386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:14206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.140 [2024-11-26 19:50:10.212405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:15.140 [2024-11-26 19:50:10.222688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ee1b48 00:16:15.140 [2024-11-26 19:50:10.224510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:24121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.140 [2024-11-26 19:50:10.224530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:15.140 [2024-11-26 19:50:10.234862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ee23b8 00:16:15.140 20622.00 IOPS, 80.55 MiB/s [2024-11-26T19:50:10.387Z] [2024-11-26 19:50:10.236663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:8388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.140 [2024-11-26 19:50:10.236680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:15.140 [2024-11-26 19:50:10.247017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ee2c28 00:16:15.140 [2024-11-26 19:50:10.248810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:1646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.140 [2024-11-26 19:50:10.248829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:15.140 [2024-11-26 19:50:10.259205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ee3498 00:16:15.140 [2024-11-26 19:50:10.260971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:92 nsid:1 lba:20068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.140 [2024-11-26 19:50:10.260990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:15.140 [2024-11-26 19:50:10.271344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ee3d08 00:16:15.140 [2024-11-26 19:50:10.273099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:9613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.140 [2024-11-26 19:50:10.273117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:15.140 [2024-11-26 19:50:10.283481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ee4578 00:16:15.140 [2024-11-26 19:50:10.285214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:24136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.140 [2024-11-26 19:50:10.285233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:15.140 [2024-11-26 19:50:10.295609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ee4de8 00:16:15.140 [2024-11-26 19:50:10.297333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:17379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.140 [2024-11-26 19:50:10.297352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:15.140 [2024-11-26 19:50:10.307740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ee5658 00:16:15.140 [2024-11-26 19:50:10.309440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:6830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.140 [2024-11-26 19:50:10.309459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:15.140 [2024-11-26 19:50:10.319895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ee5ec8 00:16:15.140 [2024-11-26 19:50:10.321573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:19595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.140 [2024-11-26 19:50:10.321593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:15.140 [2024-11-26 19:50:10.332022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ee6738 00:16:15.140 [2024-11-26 19:50:10.333697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:23770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.140 [2024-11-26 19:50:10.333797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:15.140 [2024-11-26 19:50:10.344251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ee6fa8 00:16:15.140 [2024-11-26 19:50:10.345914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:64 nsid:1 lba:12704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.140 [2024-11-26 19:50:10.345934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:15.141 [2024-11-26 19:50:10.356376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ee7818 00:16:15.141 [2024-11-26 19:50:10.358033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:13001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.141 [2024-11-26 19:50:10.358053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:15.141 [2024-11-26 19:50:10.368535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ee8088 00:16:15.141 [2024-11-26 19:50:10.370173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:18283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.141 [2024-11-26 19:50:10.370193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:15.141 [2024-11-26 19:50:10.380638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ee88f8 00:16:15.141 [2024-11-26 19:50:10.382259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:15325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.141 [2024-11-26 19:50:10.382278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:15.398 [2024-11-26 19:50:10.392742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ee9168 00:16:15.398 [2024-11-26 19:50:10.394353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:5780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.398 [2024-11-26 19:50:10.394372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:15.398 [2024-11-26 19:50:10.404897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ee99d8 00:16:15.398 [2024-11-26 19:50:10.406551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:12434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.398 [2024-11-26 19:50:10.406570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:15.398 [2024-11-26 19:50:10.417093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016eea248 00:16:15.398 [2024-11-26 19:50:10.418667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:15701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.398 [2024-11-26 19:50:10.418687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:15.398 [2024-11-26 19:50:10.429235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016eeaab8 00:16:15.398 [2024-11-26 19:50:10.430795] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:11560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.398 [2024-11-26 19:50:10.430815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:15.398 [2024-11-26 19:50:10.441364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016eeb328 00:16:15.398 [2024-11-26 19:50:10.442896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.398 [2024-11-26 19:50:10.442916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:15.398 [2024-11-26 19:50:10.453463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016eebb98 00:16:15.398 [2024-11-26 19:50:10.454982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:24408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.398 [2024-11-26 19:50:10.455002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:15.398 [2024-11-26 19:50:10.465550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016eec408 00:16:15.398 [2024-11-26 19:50:10.467075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:18022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.398 [2024-11-26 19:50:10.467097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:15.398 [2024-11-26 19:50:10.477678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016eecc78 00:16:15.398 [2024-11-26 19:50:10.479193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:24320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.398 [2024-11-26 19:50:10.479213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:15.398 [2024-11-26 19:50:10.489808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016eed4e8 00:16:15.398 [2024-11-26 19:50:10.491300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.398 [2024-11-26 19:50:10.491321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:15.398 [2024-11-26 19:50:10.501950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016eedd58 00:16:15.398 [2024-11-26 19:50:10.503426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.398 [2024-11-26 19:50:10.503446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:15.398 [2024-11-26 19:50:10.514108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016eee5c8 00:16:15.398 [2024-11-26 19:50:10.515562] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.398 [2024-11-26 19:50:10.515582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:15.399 [2024-11-26 19:50:10.526200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016eeee38 00:16:15.399 [2024-11-26 19:50:10.527647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.399 [2024-11-26 19:50:10.527669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:15.399 [2024-11-26 19:50:10.538338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016eef6a8 00:16:15.399 [2024-11-26 19:50:10.539782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:10057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.399 [2024-11-26 19:50:10.539801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:15.399 [2024-11-26 19:50:10.550454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016eeff18 00:16:15.399 [2024-11-26 19:50:10.551878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:22590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.399 [2024-11-26 19:50:10.551897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:15.399 [2024-11-26 19:50:10.562576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ef0788 00:16:15.399 [2024-11-26 19:50:10.563994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:25356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.399 [2024-11-26 19:50:10.564013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:15.399 [2024-11-26 19:50:10.574696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ef0ff8 00:16:15.399 [2024-11-26 19:50:10.576097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:15261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.399 [2024-11-26 19:50:10.576115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:15.399 [2024-11-26 19:50:10.586831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ef1868 00:16:15.399 [2024-11-26 19:50:10.588201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:11089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.399 [2024-11-26 19:50:10.588221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:15.399 [2024-11-26 19:50:10.598959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ef20d8 00:16:15.399 [2024-11-26 
19:50:10.600312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:9770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.399 [2024-11-26 19:50:10.600331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:15.399 [2024-11-26 19:50:10.611090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ef2948 00:16:15.399 [2024-11-26 19:50:10.612423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:10629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.399 [2024-11-26 19:50:10.612443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:15.399 [2024-11-26 19:50:10.623282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ef31b8 00:16:15.399 [2024-11-26 19:50:10.624596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:24349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.399 [2024-11-26 19:50:10.624616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:15.399 [2024-11-26 19:50:10.635435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ef3a28 00:16:15.399 [2024-11-26 19:50:10.636734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:2648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.399 [2024-11-26 19:50:10.636829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:15.656 [2024-11-26 19:50:10.647643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ef4298 00:16:15.656 [2024-11-26 19:50:10.648935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:20648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.656 [2024-11-26 19:50:10.648954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:15.656 [2024-11-26 19:50:10.659773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ef4b08 00:16:15.656 [2024-11-26 19:50:10.661084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:3305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.656 [2024-11-26 19:50:10.661104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:15.656 [2024-11-26 19:50:10.671966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ef5378 00:16:15.656 [2024-11-26 19:50:10.673290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:22688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.656 [2024-11-26 19:50:10.673310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:15.656 [2024-11-26 19:50:10.684193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ef5be8 00:16:15.656 
[2024-11-26 19:50:10.685422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:7410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.656 [2024-11-26 19:50:10.685442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:16:15.656 [2024-11-26 19:50:10.696324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ef6458 00:16:15.656 [2024-11-26 19:50:10.697552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:21557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.656 [2024-11-26 19:50:10.697571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:15.656 [2024-11-26 19:50:10.708490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ef6cc8 00:16:15.656 [2024-11-26 19:50:10.709707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:20481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.656 [2024-11-26 19:50:10.709727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:15.656 [2024-11-26 19:50:10.720667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ef7538 00:16:15.656 [2024-11-26 19:50:10.721872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:24219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.656 [2024-11-26 19:50:10.721892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:15.656 [2024-11-26 19:50:10.732809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ef7da8 00:16:15.656 [2024-11-26 19:50:10.733977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:10655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.656 [2024-11-26 19:50:10.733997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:15.656 [2024-11-26 19:50:10.744909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ef8618 00:16:15.656 [2024-11-26 19:50:10.746065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:11968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.656 [2024-11-26 19:50:10.746084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:15.656 [2024-11-26 19:50:10.757042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ef8e88 00:16:15.656 [2024-11-26 19:50:10.758180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:17031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.656 [2024-11-26 19:50:10.758200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:15.656 [2024-11-26 19:50:10.769412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with 
pdu=0x200016ef96f8 00:16:15.656 [2024-11-26 19:50:10.770575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:17397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.656 [2024-11-26 19:50:10.770596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:15.656 [2024-11-26 19:50:10.781816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ef9f68 00:16:15.656 [2024-11-26 19:50:10.782942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:20178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.656 [2024-11-26 19:50:10.782962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:15.656 [2024-11-26 19:50:10.793983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016efa7d8 00:16:15.656 [2024-11-26 19:50:10.795096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:19877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.656 [2024-11-26 19:50:10.795116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:15.656 [2024-11-26 19:50:10.806129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016efb048 00:16:15.656 [2024-11-26 19:50:10.807230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:14118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.656 [2024-11-26 19:50:10.807250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:15.656 [2024-11-26 19:50:10.818283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016efb8b8 00:16:15.656 [2024-11-26 19:50:10.819372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.656 [2024-11-26 19:50:10.819392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.656 [2024-11-26 19:50:10.830420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016efc128 00:16:15.656 [2024-11-26 19:50:10.831491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:13546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.656 [2024-11-26 19:50:10.831509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:15.656 [2024-11-26 19:50:10.842557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016efc998 00:16:15.656 [2024-11-26 19:50:10.843617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:11187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.656 [2024-11-26 19:50:10.843638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:15.656 [2024-11-26 19:50:10.854703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0xf74ae0) with pdu=0x200016efd208 00:16:15.656 [2024-11-26 19:50:10.855758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:17781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.656 [2024-11-26 19:50:10.855851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:15.656 [2024-11-26 19:50:10.866968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016efda78 00:16:15.656 [2024-11-26 19:50:10.867993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.657 [2024-11-26 19:50:10.868014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:15.657 [2024-11-26 19:50:10.879109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016efe2e8 00:16:15.657 [2024-11-26 19:50:10.880116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:14394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.657 [2024-11-26 19:50:10.880135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:15.657 [2024-11-26 19:50:10.891239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016efeb58 00:16:15.657 [2024-11-26 19:50:10.892227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.657 [2024-11-26 19:50:10.892247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:15.915 [2024-11-26 19:50:10.908444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016efef90 00:16:15.915 [2024-11-26 19:50:10.910365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.915 [2024-11-26 19:50:10.910385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:15.915 [2024-11-26 19:50:10.920585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016efeb58 00:16:15.915 [2024-11-26 19:50:10.922507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.915 [2024-11-26 19:50:10.922527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:15.915 [2024-11-26 19:50:10.932725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016efe2e8 00:16:15.915 [2024-11-26 19:50:10.934626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.915 [2024-11-26 19:50:10.934646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:15.915 [2024-11-26 19:50:10.944900] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xf74ae0) with pdu=0x200016efda78 00:16:15.915 [2024-11-26 19:50:10.946741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.915 [2024-11-26 19:50:10.946834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:15.915 [2024-11-26 19:50:10.957083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016efd208 00:16:15.915 [2024-11-26 19:50:10.958951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.915 [2024-11-26 19:50:10.958972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:15.915 [2024-11-26 19:50:10.969103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016efc998 00:16:15.915 [2024-11-26 19:50:10.970898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:2116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.915 [2024-11-26 19:50:10.970918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:15.915 [2024-11-26 19:50:10.981175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016efc128 00:16:15.915 [2024-11-26 19:50:10.983010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:23417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.915 [2024-11-26 19:50:10.983029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:15.915 [2024-11-26 19:50:10.993333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016efb8b8 00:16:15.915 [2024-11-26 19:50:10.995166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:7268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.915 [2024-11-26 19:50:10.995186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:15.915 [2024-11-26 19:50:11.005457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016efb048 00:16:15.915 [2024-11-26 19:50:11.007272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:16024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.915 [2024-11-26 19:50:11.007291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:15.915 [2024-11-26 19:50:11.017633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016efa7d8 00:16:15.915 [2024-11-26 19:50:11.019438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.915 [2024-11-26 19:50:11.019457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:15.915 [2024-11-26 19:50:11.029761] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ef9f68 00:16:15.915 [2024-11-26 19:50:11.031559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.915 [2024-11-26 19:50:11.031579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:15.915 [2024-11-26 19:50:11.041880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ef96f8 00:16:15.915 [2024-11-26 19:50:11.043647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.915 [2024-11-26 19:50:11.043667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:15.915 [2024-11-26 19:50:11.054026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ef8e88 00:16:15.916 [2024-11-26 19:50:11.055786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:18539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.916 [2024-11-26 19:50:11.055806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:15.916 [2024-11-26 19:50:11.066183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ef8618 00:16:15.916 [2024-11-26 19:50:11.067931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.916 [2024-11-26 19:50:11.067950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:15.916 [2024-11-26 19:50:11.078340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ef7da8 00:16:15.916 [2024-11-26 19:50:11.080069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:13784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.916 [2024-11-26 19:50:11.080090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:15.916 [2024-11-26 19:50:11.090485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ef7538 00:16:15.916 [2024-11-26 19:50:11.092198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.916 [2024-11-26 19:50:11.092216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:15.916 [2024-11-26 19:50:11.102595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ef6cc8 00:16:15.916 [2024-11-26 19:50:11.104283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.916 [2024-11-26 19:50:11.104301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:15.916 [2024-11-26 
19:50:11.114736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ef6458 00:16:15.916 [2024-11-26 19:50:11.116415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:13020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.916 [2024-11-26 19:50:11.116435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:15.916 [2024-11-26 19:50:11.126881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ef5be8 00:16:15.916 [2024-11-26 19:50:11.128543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:20950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.916 [2024-11-26 19:50:11.128563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:15.916 [2024-11-26 19:50:11.139017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ef5378 00:16:15.916 [2024-11-26 19:50:11.140634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:2149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.916 [2024-11-26 19:50:11.140653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:15.916 [2024-11-26 19:50:11.151150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ef4b08 00:16:15.916 [2024-11-26 19:50:11.152778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:25227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:15.916 [2024-11-26 19:50:11.152798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:16.175 [2024-11-26 19:50:11.163273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ef4298 00:16:16.175 [2024-11-26 19:50:11.164883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:14315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:16.175 [2024-11-26 19:50:11.164902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:16.175 [2024-11-26 19:50:11.175439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ef3a28 00:16:16.175 [2024-11-26 19:50:11.177037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:18341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:16.175 [2024-11-26 19:50:11.177057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:16.175 [2024-11-26 19:50:11.187575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ef31b8 00:16:16.175 [2024-11-26 19:50:11.189161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:8135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:16.175 [2024-11-26 19:50:11.189181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 
00:16:16.175 [2024-11-26 19:50:11.199723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ef2948 00:16:16.175 [2024-11-26 19:50:11.201291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:18492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:16.175 [2024-11-26 19:50:11.201312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.175 [2024-11-26 19:50:11.211886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ef20d8 00:16:16.175 [2024-11-26 19:50:11.213431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:2074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:16.175 [2024-11-26 19:50:11.213451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:16.175 [2024-11-26 19:50:11.224026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ef1868 00:16:16.175 [2024-11-26 19:50:11.225557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:25383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:16.175 [2024-11-26 19:50:11.225577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:16.175 20747.50 IOPS, 81.04 MiB/s [2024-11-26T19:50:11.422Z] [2024-11-26 19:50:11.236187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf74ae0) with pdu=0x200016ef0ff8 00:16:16.175 [2024-11-26 19:50:11.237680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:19580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:16.175 [2024-11-26 19:50:11.237698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:16.175 00:16:16.175 Latency(us) 00:16:16.175 [2024-11-26T19:50:11.422Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:16.175 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:16.175 nvme0n1 : 2.00 20772.68 81.14 0.00 0.00 6156.84 5671.38 23088.84 00:16:16.175 [2024-11-26T19:50:11.422Z] =================================================================================================================== 00:16:16.175 [2024-11-26T19:50:11.422Z] Total : 20772.68 81.14 0.00 0.00 6156.84 5671.38 23088.84 00:16:16.175 { 00:16:16.175 "results": [ 00:16:16.175 { 00:16:16.175 "job": "nvme0n1", 00:16:16.175 "core_mask": "0x2", 00:16:16.175 "workload": "randwrite", 00:16:16.175 "status": "finished", 00:16:16.175 "queue_depth": 128, 00:16:16.175 "io_size": 4096, 00:16:16.175 "runtime": 2.003738, 00:16:16.175 "iops": 20772.675868801212, 00:16:16.175 "mibps": 81.14326511250474, 00:16:16.175 "io_failed": 0, 00:16:16.175 "io_timeout": 0, 00:16:16.175 "avg_latency_us": 6156.839102641106, 00:16:16.175 "min_latency_us": 5671.384615384615, 00:16:16.175 "max_latency_us": 23088.836923076924 00:16:16.175 } 00:16:16.175 ], 00:16:16.175 "core_count": 1 00:16:16.175 } 00:16:16.175 19:50:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:16:16.175 19:50:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc 
bdev_get_iostat -b nvme0n1 00:16:16.175 19:50:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:16:16.175 19:50:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:16:16.175 | .driver_specific 00:16:16.175 | .nvme_error 00:16:16.175 | .status_code 00:16:16.175 | .command_transient_transport_error' 00:16:16.434 19:50:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 163 > 0 )) 00:16:16.434 19:50:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 78937 00:16:16.434 19:50:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 78937 ']' 00:16:16.434 19:50:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 78937 00:16:16.434 19:50:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:16:16.434 19:50:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:16.434 19:50:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78937 00:16:16.434 killing process with pid 78937 00:16:16.434 Received shutdown signal, test time was about 2.000000 seconds 00:16:16.434 00:16:16.434 Latency(us) 00:16:16.434 [2024-11-26T19:50:11.681Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:16.434 [2024-11-26T19:50:11.681Z] =================================================================================================================== 00:16:16.434 [2024-11-26T19:50:11.681Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:16.434 19:50:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:16.434 19:50:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:16.434 19:50:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78937' 00:16:16.434 19:50:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 78937 00:16:16.434 19:50:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 78937 00:16:16.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
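For reference, the transient-error check that host/digest.sh performs above reduces to one bdev_get_iostat RPC call plus a jq filter. A minimal standalone sketch, using only the socket path, bdev name and filter that appear in this log (the errcount variable name is illustrative):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Query the still-running bdevperf instance for per-bdev I/O statistics,
# including the per-status-code NVMe error counters.
errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
# The check passes only if digest corruption actually produced transient
# transport errors; in the run above the counter reads 163, so it holds.
(( errcount > 0 ))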
00:16:16.434 19:50:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:16:16.434 19:50:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:16:16.434 19:50:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:16:16.434 19:50:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:16:16.434 19:50:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:16:16.434 19:50:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=78986 00:16:16.434 19:50:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 78986 /var/tmp/bperf.sock 00:16:16.434 19:50:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 78986 ']' 00:16:16.434 19:50:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:16.434 19:50:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:16.434 19:50:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:16.434 19:50:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:16.434 19:50:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:16.434 19:50:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:16:16.434 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:16.434 Zero copy mechanism will not be used. 00:16:16.434 [2024-11-26 19:50:11.622934] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:16:16.434 [2024-11-26 19:50:11.622998] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78986 ] 00:16:16.693 [2024-11-26 19:50:11.751285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:16.693 [2024-11-26 19:50:11.782887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:16.693 [2024-11-26 19:50:11.812446] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:17.325 19:50:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:17.325 19:50:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:16:17.325 19:50:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:17.325 19:50:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:17.582 19:50:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:16:17.582 19:50:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.582 19:50:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:17.582 19:50:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.582 19:50:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:17.583 19:50:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:17.841 nvme0n1 00:16:17.841 19:50:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:16:17.841 19:50:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.841 19:50:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:17.841 19:50:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.841 19:50:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:16:17.841 19:50:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:17.841 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:17.841 Zero copy mechanism will not be used. 00:16:17.841 Running I/O for 2 seconds... 
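Collected from the xtrace lines above, the setup for this error-injection pass is roughly the following sequence. The bperf_rpc calls carry -s /var/tmp/bperf.sock exactly as shown; the rpc_cmd invocations of accel_error_inject_error do not show a socket in this excerpt, so the default RPC socket is assumed in this sketch:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
BPERF_SOCK=/var/tmp/bperf.sock
# Keep per-status-code NVMe error counters and retry transient failures indefinitely.
"$RPC" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# Make sure crc32c error injection is disabled while the controller attaches
# (issued via rpc_cmd in the script; default RPC socket assumed here).
"$RPC" accel_error_inject_error -o crc32c -t disable
# Attach the target over TCP with data digest (--ddgst) enabled, using the
# address, subsystem NQN and controller name printed above.
"$RPC" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Inject crc32c corruption, with the parameters exactly as they appear in the log.
"$RPC" accel_error_inject_error -o crc32c -t corrupt -i 32
# Kick off the timed workload in the already-running bdevperf instance
# (started above with -w randwrite -o 131072 -t 2 -q 16).
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$BPERF_SOCK" perform_tests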
00:16:17.841 [2024-11-26 19:50:13.027127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:17.841 [2024-11-26 19:50:13.027211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.841 [2024-11-26 19:50:13.027232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:17.841 [2024-11-26 19:50:13.030298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:17.841 [2024-11-26 19:50:13.030350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.841 [2024-11-26 19:50:13.030365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:17.841 [2024-11-26 19:50:13.033254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:17.841 [2024-11-26 19:50:13.033395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.841 [2024-11-26 19:50:13.033408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:17.841 [2024-11-26 19:50:13.036306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:17.841 [2024-11-26 19:50:13.036361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.841 [2024-11-26 19:50:13.036373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:17.841 [2024-11-26 19:50:13.039238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:17.841 [2024-11-26 19:50:13.039285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.841 [2024-11-26 19:50:13.039297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:17.841 [2024-11-26 19:50:13.042161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:17.841 [2024-11-26 19:50:13.042211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.841 [2024-11-26 19:50:13.042222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:17.841 [2024-11-26 19:50:13.045074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:17.841 [2024-11-26 19:50:13.045123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.841 [2024-11-26 19:50:13.045135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:16:17.841 [2024-11-26 19:50:13.047982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:17.841 [2024-11-26 19:50:13.048035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.841 [2024-11-26 19:50:13.048047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:17.841 [2024-11-26 19:50:13.050877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:17.841 [2024-11-26 19:50:13.050933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.841 [2024-11-26 19:50:13.050944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:17.841 [2024-11-26 19:50:13.053836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:17.841 [2024-11-26 19:50:13.053892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.841 [2024-11-26 19:50:13.053904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:17.841 [2024-11-26 19:50:13.056763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:17.841 [2024-11-26 19:50:13.056825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.841 [2024-11-26 19:50:13.056837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:17.841 [2024-11-26 19:50:13.059723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:17.841 [2024-11-26 19:50:13.059859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.841 [2024-11-26 19:50:13.059871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:17.841 [2024-11-26 19:50:13.062738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:17.841 [2024-11-26 19:50:13.062803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.841 [2024-11-26 19:50:13.062815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:17.841 [2024-11-26 19:50:13.065680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:17.841 [2024-11-26 19:50:13.065736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.841 [2024-11-26 19:50:13.065748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:17.842 [2024-11-26 19:50:13.068645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:17.842 [2024-11-26 19:50:13.068697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.842 [2024-11-26 19:50:13.068709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:17.842 [2024-11-26 19:50:13.071627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:17.842 [2024-11-26 19:50:13.071735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.842 [2024-11-26 19:50:13.071747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:17.842 [2024-11-26 19:50:13.074677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:17.842 [2024-11-26 19:50:13.074731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.842 [2024-11-26 19:50:13.074742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:17.842 [2024-11-26 19:50:13.077707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:17.842 [2024-11-26 19:50:13.077762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.842 [2024-11-26 19:50:13.077784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:17.842 [2024-11-26 19:50:13.080700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:17.842 [2024-11-26 19:50:13.080757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.842 [2024-11-26 19:50:13.080781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:17.842 [2024-11-26 19:50:13.083717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:17.842 [2024-11-26 19:50:13.083840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.842 [2024-11-26 19:50:13.083852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.102 [2024-11-26 19:50:13.086781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.102 [2024-11-26 19:50:13.086839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.102 [2024-11-26 19:50:13.086850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.102 [2024-11-26 19:50:13.089735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.102 [2024-11-26 19:50:13.089801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.102 [2024-11-26 19:50:13.089813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.102 [2024-11-26 19:50:13.092649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.102 [2024-11-26 19:50:13.092705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.102 [2024-11-26 19:50:13.092717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.102 [2024-11-26 19:50:13.095590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.102 [2024-11-26 19:50:13.095704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.102 [2024-11-26 19:50:13.095715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.102 [2024-11-26 19:50:13.098598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.102 [2024-11-26 19:50:13.098648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.102 [2024-11-26 19:50:13.098660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.102 [2024-11-26 19:50:13.101528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.102 [2024-11-26 19:50:13.101584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.102 [2024-11-26 19:50:13.101596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.102 [2024-11-26 19:50:13.104491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.102 [2024-11-26 19:50:13.104546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.102 [2024-11-26 19:50:13.104558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.102 [2024-11-26 19:50:13.107424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.102 [2024-11-26 19:50:13.107536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.102 [2024-11-26 19:50:13.107548] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.102 [2024-11-26 19:50:13.110424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.102 [2024-11-26 19:50:13.110480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.102 [2024-11-26 19:50:13.110492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.102 [2024-11-26 19:50:13.113372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.102 [2024-11-26 19:50:13.113427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.102 [2024-11-26 19:50:13.113439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.102 [2024-11-26 19:50:13.116259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.102 [2024-11-26 19:50:13.116311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.102 [2024-11-26 19:50:13.116323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.102 [2024-11-26 19:50:13.119108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.102 [2024-11-26 19:50:13.119204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.102 [2024-11-26 19:50:13.119215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.102 [2024-11-26 19:50:13.122017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.102 [2024-11-26 19:50:13.122065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.103 [2024-11-26 19:50:13.122077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.103 [2024-11-26 19:50:13.124866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.103 [2024-11-26 19:50:13.124921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.103 [2024-11-26 19:50:13.124932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.103 [2024-11-26 19:50:13.127727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.103 [2024-11-26 19:50:13.127787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.103 [2024-11-26 19:50:13.127799] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.103 [2024-11-26 19:50:13.130569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.103 [2024-11-26 19:50:13.130665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.103 [2024-11-26 19:50:13.130677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.103 [2024-11-26 19:50:13.133526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.103 [2024-11-26 19:50:13.133574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.103 [2024-11-26 19:50:13.133586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.103 [2024-11-26 19:50:13.136421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.103 [2024-11-26 19:50:13.136473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.103 [2024-11-26 19:50:13.136485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.103 [2024-11-26 19:50:13.139282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.103 [2024-11-26 19:50:13.139337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.103 [2024-11-26 19:50:13.139348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.103 [2024-11-26 19:50:13.142150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.103 [2024-11-26 19:50:13.142245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.103 [2024-11-26 19:50:13.142256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.103 [2024-11-26 19:50:13.145101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.103 [2024-11-26 19:50:13.145156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.103 [2024-11-26 19:50:13.145168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.103 [2024-11-26 19:50:13.148027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.103 [2024-11-26 19:50:13.148082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.103 [2024-11-26 
19:50:13.148094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.103 [2024-11-26 19:50:13.150931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.103 [2024-11-26 19:50:13.150985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.103 [2024-11-26 19:50:13.150997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.103 [2024-11-26 19:50:13.153822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.103 [2024-11-26 19:50:13.153877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.103 [2024-11-26 19:50:13.153889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.103 [2024-11-26 19:50:13.156746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.103 [2024-11-26 19:50:13.156809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.103 [2024-11-26 19:50:13.156821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.103 [2024-11-26 19:50:13.159666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.103 [2024-11-26 19:50:13.159720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.103 [2024-11-26 19:50:13.159731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.103 [2024-11-26 19:50:13.162590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.103 [2024-11-26 19:50:13.162646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.103 [2024-11-26 19:50:13.162658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.103 [2024-11-26 19:50:13.165531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.103 [2024-11-26 19:50:13.165632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.103 [2024-11-26 19:50:13.165644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.103 [2024-11-26 19:50:13.168577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.103 [2024-11-26 19:50:13.168628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:18.103 [2024-11-26 19:50:13.168640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.103 [2024-11-26 19:50:13.171510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.103 [2024-11-26 19:50:13.171566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.103 [2024-11-26 19:50:13.171577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.103 [2024-11-26 19:50:13.174433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.103 [2024-11-26 19:50:13.174490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.103 [2024-11-26 19:50:13.174502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.103 [2024-11-26 19:50:13.177384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.103 [2024-11-26 19:50:13.177483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.103 [2024-11-26 19:50:13.177495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.103 [2024-11-26 19:50:13.180385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.103 [2024-11-26 19:50:13.180431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.103 [2024-11-26 19:50:13.180442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.103 [2024-11-26 19:50:13.183314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.103 [2024-11-26 19:50:13.183370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.103 [2024-11-26 19:50:13.183382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.103 [2024-11-26 19:50:13.186237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.103 [2024-11-26 19:50:13.186291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.103 [2024-11-26 19:50:13.186303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.103 [2024-11-26 19:50:13.189177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.103 [2024-11-26 19:50:13.189242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:16:18.103 [2024-11-26 19:50:13.189253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.103 [2024-11-26 19:50:13.192111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.103 [2024-11-26 19:50:13.192211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.103 [2024-11-26 19:50:13.192223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.103 [2024-11-26 19:50:13.195100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.103 [2024-11-26 19:50:13.195159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.103 [2024-11-26 19:50:13.195171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.103 [2024-11-26 19:50:13.198032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.103 [2024-11-26 19:50:13.198087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.103 [2024-11-26 19:50:13.198099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.103 [2024-11-26 19:50:13.200946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.104 [2024-11-26 19:50:13.201002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.104 [2024-11-26 19:50:13.201014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.104 [2024-11-26 19:50:13.203869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.104 [2024-11-26 19:50:13.203926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.104 [2024-11-26 19:50:13.203938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.104 [2024-11-26 19:50:13.206801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.104 [2024-11-26 19:50:13.206856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.104 [2024-11-26 19:50:13.206867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.104 [2024-11-26 19:50:13.209741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.104 [2024-11-26 19:50:13.209806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4096 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.104 [2024-11-26 19:50:13.209818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.104 [2024-11-26 19:50:13.212652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.104 [2024-11-26 19:50:13.212707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.104 [2024-11-26 19:50:13.212719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.104 [2024-11-26 19:50:13.215602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.104 [2024-11-26 19:50:13.215715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.104 [2024-11-26 19:50:13.215727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.104 [2024-11-26 19:50:13.218621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.104 [2024-11-26 19:50:13.218676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.104 [2024-11-26 19:50:13.218688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.104 [2024-11-26 19:50:13.221566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.104 [2024-11-26 19:50:13.221622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.104 [2024-11-26 19:50:13.221634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.104 [2024-11-26 19:50:13.224442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.104 [2024-11-26 19:50:13.224496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.104 [2024-11-26 19:50:13.224507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.104 [2024-11-26 19:50:13.227317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.104 [2024-11-26 19:50:13.227416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.104 [2024-11-26 19:50:13.227428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.104 [2024-11-26 19:50:13.230332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.104 [2024-11-26 19:50:13.230388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:1 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.104 [2024-11-26 19:50:13.230400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.104 [2024-11-26 19:50:13.233263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.104 [2024-11-26 19:50:13.233312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.104 [2024-11-26 19:50:13.233323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.104 [2024-11-26 19:50:13.236164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.104 [2024-11-26 19:50:13.236220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.104 [2024-11-26 19:50:13.236232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.104 [2024-11-26 19:50:13.239144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.104 [2024-11-26 19:50:13.239182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.104 [2024-11-26 19:50:13.239193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.104 [2024-11-26 19:50:13.242043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.104 [2024-11-26 19:50:13.242154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.104 [2024-11-26 19:50:13.242166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.104 [2024-11-26 19:50:13.245048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.104 [2024-11-26 19:50:13.245103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.104 [2024-11-26 19:50:13.245115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.104 [2024-11-26 19:50:13.247978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.104 [2024-11-26 19:50:13.248034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.104 [2024-11-26 19:50:13.248046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.104 [2024-11-26 19:50:13.250881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.104 [2024-11-26 19:50:13.250938] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.104 [2024-11-26 19:50:13.250950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.104 [2024-11-26 19:50:13.253834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.104 [2024-11-26 19:50:13.253891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.104 [2024-11-26 19:50:13.253903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.104 [2024-11-26 19:50:13.256789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.104 [2024-11-26 19:50:13.256840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.104 [2024-11-26 19:50:13.256852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.104 [2024-11-26 19:50:13.259748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.104 [2024-11-26 19:50:13.259809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.104 [2024-11-26 19:50:13.259821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.104 [2024-11-26 19:50:13.262717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.104 [2024-11-26 19:50:13.262787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.104 [2024-11-26 19:50:13.262800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.104 [2024-11-26 19:50:13.265696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.104 [2024-11-26 19:50:13.265818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.104 [2024-11-26 19:50:13.265830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.104 [2024-11-26 19:50:13.268786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.104 [2024-11-26 19:50:13.268843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.104 [2024-11-26 19:50:13.268856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.104 [2024-11-26 19:50:13.271779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.104 [2024-11-26 19:50:13.271835] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.104 [2024-11-26 19:50:13.271848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.104 [2024-11-26 19:50:13.274747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.104 [2024-11-26 19:50:13.274813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.104 [2024-11-26 19:50:13.274825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.104 [2024-11-26 19:50:13.277718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.104 [2024-11-26 19:50:13.277838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.104 [2024-11-26 19:50:13.277850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.104 [2024-11-26 19:50:13.280731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.104 [2024-11-26 19:50:13.280792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.105 [2024-11-26 19:50:13.280804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.105 [2024-11-26 19:50:13.283652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.105 [2024-11-26 19:50:13.283707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.105 [2024-11-26 19:50:13.283719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.105 [2024-11-26 19:50:13.286570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.105 [2024-11-26 19:50:13.286627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.105 [2024-11-26 19:50:13.286639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.105 [2024-11-26 19:50:13.289526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.105 [2024-11-26 19:50:13.289628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.105 [2024-11-26 19:50:13.289640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.105 [2024-11-26 19:50:13.292568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.105 [2024-11-26 
19:50:13.292624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.105 [2024-11-26 19:50:13.292636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.105 [2024-11-26 19:50:13.295515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.105 [2024-11-26 19:50:13.295570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.105 [2024-11-26 19:50:13.295582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.105 [2024-11-26 19:50:13.298442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.105 [2024-11-26 19:50:13.298498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.105 [2024-11-26 19:50:13.298509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.105 [2024-11-26 19:50:13.301385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.105 [2024-11-26 19:50:13.301496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.105 [2024-11-26 19:50:13.301508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.105 [2024-11-26 19:50:13.304397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.105 [2024-11-26 19:50:13.304452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.105 [2024-11-26 19:50:13.304464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.105 [2024-11-26 19:50:13.307328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.105 [2024-11-26 19:50:13.307383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.105 [2024-11-26 19:50:13.307395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.105 [2024-11-26 19:50:13.310254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.105 [2024-11-26 19:50:13.310307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.105 [2024-11-26 19:50:13.310319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.105 [2024-11-26 19:50:13.313187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 
00:16:18.105 [2024-11-26 19:50:13.313243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.105 [2024-11-26 19:50:13.313255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.105 [2024-11-26 19:50:13.316120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.105 [2024-11-26 19:50:13.316229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.105 [2024-11-26 19:50:13.316241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.105 [2024-11-26 19:50:13.319128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.105 [2024-11-26 19:50:13.319184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.105 [2024-11-26 19:50:13.319196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.105 [2024-11-26 19:50:13.322048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.105 [2024-11-26 19:50:13.322095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.105 [2024-11-26 19:50:13.322107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.105 [2024-11-26 19:50:13.324971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.105 [2024-11-26 19:50:13.325028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.105 [2024-11-26 19:50:13.325040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.105 [2024-11-26 19:50:13.327933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.105 [2024-11-26 19:50:13.327989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.105 [2024-11-26 19:50:13.328000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.105 [2024-11-26 19:50:13.330846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.105 [2024-11-26 19:50:13.330902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.105 [2024-11-26 19:50:13.330914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.105 [2024-11-26 19:50:13.333780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with 
pdu=0x200016eff3c8 00:16:18.105 [2024-11-26 19:50:13.333836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.105 [2024-11-26 19:50:13.333849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.105 [2024-11-26 19:50:13.336702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.105 [2024-11-26 19:50:13.336752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.105 [2024-11-26 19:50:13.336774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.105 [2024-11-26 19:50:13.339646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.105 [2024-11-26 19:50:13.339747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.105 [2024-11-26 19:50:13.339758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.105 [2024-11-26 19:50:13.342639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.105 [2024-11-26 19:50:13.342694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.105 [2024-11-26 19:50:13.342705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.365 [2024-11-26 19:50:13.345569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.365 [2024-11-26 19:50:13.345625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.365 [2024-11-26 19:50:13.345637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.365 [2024-11-26 19:50:13.348539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.365 [2024-11-26 19:50:13.348596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.365 [2024-11-26 19:50:13.348608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.365 [2024-11-26 19:50:13.351522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.365 [2024-11-26 19:50:13.351579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.365 [2024-11-26 19:50:13.351592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.365 [2024-11-26 19:50:13.354500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.365 [2024-11-26 19:50:13.354603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.365 [2024-11-26 19:50:13.354615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.365 [2024-11-26 19:50:13.357559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.365 [2024-11-26 19:50:13.357619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.365 [2024-11-26 19:50:13.357631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.365 [2024-11-26 19:50:13.360572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.365 [2024-11-26 19:50:13.360629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.365 [2024-11-26 19:50:13.360642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.365 [2024-11-26 19:50:13.363565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.365 [2024-11-26 19:50:13.363622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.365 [2024-11-26 19:50:13.363635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.365 [2024-11-26 19:50:13.366544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.366 [2024-11-26 19:50:13.366658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.366 [2024-11-26 19:50:13.366670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.366 [2024-11-26 19:50:13.369565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.366 [2024-11-26 19:50:13.369620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.366 [2024-11-26 19:50:13.369632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.366 [2024-11-26 19:50:13.372546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.366 [2024-11-26 19:50:13.372602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.366 [2024-11-26 19:50:13.372614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.366 [2024-11-26 19:50:13.375483] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.366 [2024-11-26 19:50:13.375538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.366 [2024-11-26 19:50:13.375550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.366 [2024-11-26 19:50:13.378423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.366 [2024-11-26 19:50:13.378532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.366 [2024-11-26 19:50:13.378544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.366 [2024-11-26 19:50:13.381438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.366 [2024-11-26 19:50:13.381492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.366 [2024-11-26 19:50:13.381503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.366 [2024-11-26 19:50:13.384362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.366 [2024-11-26 19:50:13.384417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.366 [2024-11-26 19:50:13.384428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.366 [2024-11-26 19:50:13.387323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.366 [2024-11-26 19:50:13.387379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.366 [2024-11-26 19:50:13.387391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.366 [2024-11-26 19:50:13.390257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.366 [2024-11-26 19:50:13.390366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.366 [2024-11-26 19:50:13.390377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.366 [2024-11-26 19:50:13.393251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.366 [2024-11-26 19:50:13.393306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.366 [2024-11-26 19:50:13.393317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.366 [2024-11-26 19:50:13.396211] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.366 [2024-11-26 19:50:13.396266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.366 [2024-11-26 19:50:13.396277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.366 [2024-11-26 19:50:13.399153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.366 [2024-11-26 19:50:13.399209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.366 [2024-11-26 19:50:13.399221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.366 [2024-11-26 19:50:13.402070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.366 [2024-11-26 19:50:13.402126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.366 [2024-11-26 19:50:13.402138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.366 [2024-11-26 19:50:13.405008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.366 [2024-11-26 19:50:13.405064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.366 [2024-11-26 19:50:13.405076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.366 [2024-11-26 19:50:13.407965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.366 [2024-11-26 19:50:13.408019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.366 [2024-11-26 19:50:13.408031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.366 [2024-11-26 19:50:13.410876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.366 [2024-11-26 19:50:13.410932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.366 [2024-11-26 19:50:13.410944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.366 [2024-11-26 19:50:13.413807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.366 [2024-11-26 19:50:13.413859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.366 [2024-11-26 19:50:13.413871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.366 
[2024-11-26 19:50:13.416726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.366 [2024-11-26 19:50:13.416852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.366 [2024-11-26 19:50:13.416863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.366 [2024-11-26 19:50:13.419758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.366 [2024-11-26 19:50:13.419823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.366 [2024-11-26 19:50:13.419834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.366 [2024-11-26 19:50:13.422673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.366 [2024-11-26 19:50:13.422729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.366 [2024-11-26 19:50:13.422741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.366 [2024-11-26 19:50:13.425577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.366 [2024-11-26 19:50:13.425628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.366 [2024-11-26 19:50:13.425640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.366 [2024-11-26 19:50:13.428523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.366 [2024-11-26 19:50:13.428624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.366 [2024-11-26 19:50:13.428636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.366 [2024-11-26 19:50:13.431488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.366 [2024-11-26 19:50:13.431541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.366 [2024-11-26 19:50:13.431553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.366 [2024-11-26 19:50:13.434310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.366 [2024-11-26 19:50:13.434364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.367 [2024-11-26 19:50:13.434375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 
m:0 dnr:0 00:16:18.367 [2024-11-26 19:50:13.437233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.367 [2024-11-26 19:50:13.437288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.367 [2024-11-26 19:50:13.437300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.367 [2024-11-26 19:50:13.440186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.367 [2024-11-26 19:50:13.440296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.367 [2024-11-26 19:50:13.440308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.367 [2024-11-26 19:50:13.443213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.367 [2024-11-26 19:50:13.443265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.367 [2024-11-26 19:50:13.443276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.367 [2024-11-26 19:50:13.446134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.367 [2024-11-26 19:50:13.446188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.367 [2024-11-26 19:50:13.446199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.367 [2024-11-26 19:50:13.449072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.367 [2024-11-26 19:50:13.449134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.367 [2024-11-26 19:50:13.449146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.367 [2024-11-26 19:50:13.451994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.367 [2024-11-26 19:50:13.452042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.367 [2024-11-26 19:50:13.452054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.367 [2024-11-26 19:50:13.454918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.367 [2024-11-26 19:50:13.454974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.367 [2024-11-26 19:50:13.454985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 
cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.367 [2024-11-26 19:50:13.457855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.367 [2024-11-26 19:50:13.457910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.367 [2024-11-26 19:50:13.457922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.367 [2024-11-26 19:50:13.460826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.367 [2024-11-26 19:50:13.460881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.367 [2024-11-26 19:50:13.460893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.367 [2024-11-26 19:50:13.463733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.367 [2024-11-26 19:50:13.463798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.367 [2024-11-26 19:50:13.463810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.367 [2024-11-26 19:50:13.466637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.367 [2024-11-26 19:50:13.466737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.367 [2024-11-26 19:50:13.466749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.367 [2024-11-26 19:50:13.469659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.367 [2024-11-26 19:50:13.469714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.367 [2024-11-26 19:50:13.469726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.367 [2024-11-26 19:50:13.472609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.367 [2024-11-26 19:50:13.472659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.367 [2024-11-26 19:50:13.472670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.367 [2024-11-26 19:50:13.475567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.367 [2024-11-26 19:50:13.475608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.367 [2024-11-26 19:50:13.475620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.367 [2024-11-26 19:50:13.478511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.367 [2024-11-26 19:50:13.478625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.367 [2024-11-26 19:50:13.478637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.367 [2024-11-26 19:50:13.481590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.367 [2024-11-26 19:50:13.481643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.367 [2024-11-26 19:50:13.481655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.367 [2024-11-26 19:50:13.484585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.367 [2024-11-26 19:50:13.484637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.367 [2024-11-26 19:50:13.484649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.367 [2024-11-26 19:50:13.487558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.367 [2024-11-26 19:50:13.487625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.367 [2024-11-26 19:50:13.487638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.367 [2024-11-26 19:50:13.490556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.367 [2024-11-26 19:50:13.490671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.367 [2024-11-26 19:50:13.490683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.367 [2024-11-26 19:50:13.493573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.367 [2024-11-26 19:50:13.493624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.367 [2024-11-26 19:50:13.493636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.367 [2024-11-26 19:50:13.496515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.367 [2024-11-26 19:50:13.496571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.367 [2024-11-26 19:50:13.496583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.367 [2024-11-26 19:50:13.499481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.367 [2024-11-26 19:50:13.499532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.367 [2024-11-26 19:50:13.499544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.367 [2024-11-26 19:50:13.502408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.367 [2024-11-26 19:50:13.502519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.367 [2024-11-26 19:50:13.502531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.367 [2024-11-26 19:50:13.505465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.367 [2024-11-26 19:50:13.505521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.367 [2024-11-26 19:50:13.505533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.367 [2024-11-26 19:50:13.508412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.367 [2024-11-26 19:50:13.508462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.367 [2024-11-26 19:50:13.508474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.367 [2024-11-26 19:50:13.511313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.367 [2024-11-26 19:50:13.511367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.367 [2024-11-26 19:50:13.511378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.367 [2024-11-26 19:50:13.514265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.367 [2024-11-26 19:50:13.514364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.368 [2024-11-26 19:50:13.514376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.368 [2024-11-26 19:50:13.517314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.368 [2024-11-26 19:50:13.517365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.368 [2024-11-26 19:50:13.517377] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.368 [2024-11-26 19:50:13.520233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.368 [2024-11-26 19:50:13.520287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.368 [2024-11-26 19:50:13.520299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.368 [2024-11-26 19:50:13.523160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.368 [2024-11-26 19:50:13.523218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.368 [2024-11-26 19:50:13.523229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.368 [2024-11-26 19:50:13.526070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.368 [2024-11-26 19:50:13.526126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.368 [2024-11-26 19:50:13.526138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.368 [2024-11-26 19:50:13.529021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.368 [2024-11-26 19:50:13.529077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.368 [2024-11-26 19:50:13.529089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.368 [2024-11-26 19:50:13.531963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.368 [2024-11-26 19:50:13.532018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.368 [2024-11-26 19:50:13.532030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.368 [2024-11-26 19:50:13.534868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.368 [2024-11-26 19:50:13.534924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.368 [2024-11-26 19:50:13.534936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.368 [2024-11-26 19:50:13.537804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.368 [2024-11-26 19:50:13.537851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.368 [2024-11-26 19:50:13.537863] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.368 [2024-11-26 19:50:13.540727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.368 [2024-11-26 19:50:13.540851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.368 [2024-11-26 19:50:13.540863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.368 [2024-11-26 19:50:13.543717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.368 [2024-11-26 19:50:13.543782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.368 [2024-11-26 19:50:13.543794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.368 [2024-11-26 19:50:13.546638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.368 [2024-11-26 19:50:13.546688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.368 [2024-11-26 19:50:13.546700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.368 [2024-11-26 19:50:13.549588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.368 [2024-11-26 19:50:13.549641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.368 [2024-11-26 19:50:13.549653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.368 [2024-11-26 19:50:13.552530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.368 [2024-11-26 19:50:13.552640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.368 [2024-11-26 19:50:13.552652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.368 [2024-11-26 19:50:13.555526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.368 [2024-11-26 19:50:13.555580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.368 [2024-11-26 19:50:13.555592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.368 [2024-11-26 19:50:13.558450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.368 [2024-11-26 19:50:13.558504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.368 [2024-11-26 
19:50:13.558516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.368 [2024-11-26 19:50:13.561394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.368 [2024-11-26 19:50:13.561449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.368 [2024-11-26 19:50:13.561461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.368 [2024-11-26 19:50:13.564324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.368 [2024-11-26 19:50:13.564425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.368 [2024-11-26 19:50:13.564437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.368 [2024-11-26 19:50:13.567340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.368 [2024-11-26 19:50:13.567391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.368 [2024-11-26 19:50:13.567403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.368 [2024-11-26 19:50:13.570265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.368 [2024-11-26 19:50:13.570320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.368 [2024-11-26 19:50:13.570332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.368 [2024-11-26 19:50:13.573199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.368 [2024-11-26 19:50:13.573254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.368 [2024-11-26 19:50:13.573266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.368 [2024-11-26 19:50:13.576160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.368 [2024-11-26 19:50:13.576223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.368 [2024-11-26 19:50:13.576235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.368 [2024-11-26 19:50:13.579097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.368 [2024-11-26 19:50:13.579217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:18.368 [2024-11-26 19:50:13.579229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.368 [2024-11-26 19:50:13.582146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.368 [2024-11-26 19:50:13.582200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.368 [2024-11-26 19:50:13.582212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.368 [2024-11-26 19:50:13.585121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.368 [2024-11-26 19:50:13.585174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.369 [2024-11-26 19:50:13.585187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.369 [2024-11-26 19:50:13.588113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.369 [2024-11-26 19:50:13.588168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.369 [2024-11-26 19:50:13.588180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.369 [2024-11-26 19:50:13.591069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.369 [2024-11-26 19:50:13.591131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.369 [2024-11-26 19:50:13.591143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.369 [2024-11-26 19:50:13.593997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.369 [2024-11-26 19:50:13.594052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.369 [2024-11-26 19:50:13.594064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.369 [2024-11-26 19:50:13.596937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.369 [2024-11-26 19:50:13.596992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.369 [2024-11-26 19:50:13.597004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.369 [2024-11-26 19:50:13.599886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.369 [2024-11-26 19:50:13.599941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:16:18.369 [2024-11-26 19:50:13.599952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.369 [2024-11-26 19:50:13.602802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.369 [2024-11-26 19:50:13.602851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.369 [2024-11-26 19:50:13.602863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.629 [2024-11-26 19:50:13.605728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.629 [2024-11-26 19:50:13.605851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.629 [2024-11-26 19:50:13.605862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.629 [2024-11-26 19:50:13.608753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.629 [2024-11-26 19:50:13.608817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.629 [2024-11-26 19:50:13.608830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.629 [2024-11-26 19:50:13.611651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.629 [2024-11-26 19:50:13.611706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.629 [2024-11-26 19:50:13.611717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.629 [2024-11-26 19:50:13.614567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.629 [2024-11-26 19:50:13.614617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.629 [2024-11-26 19:50:13.614629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.629 [2024-11-26 19:50:13.617479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.629 [2024-11-26 19:50:13.617578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.629 [2024-11-26 19:50:13.617591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.629 [2024-11-26 19:50:13.620513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.629 [2024-11-26 19:50:13.620569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.629 [2024-11-26 19:50:13.620580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.629 [2024-11-26 19:50:13.623462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.629 [2024-11-26 19:50:13.623517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.629 [2024-11-26 19:50:13.623529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.629 [2024-11-26 19:50:13.626422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.629 [2024-11-26 19:50:13.626476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.629 [2024-11-26 19:50:13.626488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.629 [2024-11-26 19:50:13.629420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.629 [2024-11-26 19:50:13.629537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.629 [2024-11-26 19:50:13.629549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.629 [2024-11-26 19:50:13.632461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.629 [2024-11-26 19:50:13.632527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.629 [2024-11-26 19:50:13.632539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.629 [2024-11-26 19:50:13.635430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.629 [2024-11-26 19:50:13.635485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.629 [2024-11-26 19:50:13.635497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.629 [2024-11-26 19:50:13.638362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.629 [2024-11-26 19:50:13.638418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.629 [2024-11-26 19:50:13.638430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.629 [2024-11-26 19:50:13.641309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.629 [2024-11-26 19:50:13.641408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.629 [2024-11-26 19:50:13.641420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.629 [2024-11-26 19:50:13.644326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.629 [2024-11-26 19:50:13.644382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.629 [2024-11-26 19:50:13.644394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.629 [2024-11-26 19:50:13.647242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.629 [2024-11-26 19:50:13.647297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.629 [2024-11-26 19:50:13.647308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.629 [2024-11-26 19:50:13.650160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.630 [2024-11-26 19:50:13.650216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.630 [2024-11-26 19:50:13.650228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.630 [2024-11-26 19:50:13.653098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.630 [2024-11-26 19:50:13.653153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.630 [2024-11-26 19:50:13.653165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.630 [2024-11-26 19:50:13.656045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.630 [2024-11-26 19:50:13.656102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.630 [2024-11-26 19:50:13.656114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.630 [2024-11-26 19:50:13.658977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.630 [2024-11-26 19:50:13.659030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.630 [2024-11-26 19:50:13.659042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.630 [2024-11-26 19:50:13.661923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.630 [2024-11-26 19:50:13.661979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.630 [2024-11-26 19:50:13.661991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.630 [2024-11-26 19:50:13.664850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.630 [2024-11-26 19:50:13.664907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.630 [2024-11-26 19:50:13.664918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.630 [2024-11-26 19:50:13.667791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.630 [2024-11-26 19:50:13.667842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.630 [2024-11-26 19:50:13.667854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.630 [2024-11-26 19:50:13.670726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.630 [2024-11-26 19:50:13.670792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.630 [2024-11-26 19:50:13.670804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.630 [2024-11-26 19:50:13.673655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.630 [2024-11-26 19:50:13.673711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.630 [2024-11-26 19:50:13.673722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.630 [2024-11-26 19:50:13.676586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.630 [2024-11-26 19:50:13.676637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.630 [2024-11-26 19:50:13.676648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.630 [2024-11-26 19:50:13.679529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.630 [2024-11-26 19:50:13.679630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.630 [2024-11-26 19:50:13.679642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.630 [2024-11-26 19:50:13.682525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.630 [2024-11-26 19:50:13.682581] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.630 [2024-11-26 19:50:13.682593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.630 [2024-11-26 19:50:13.685502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.630 [2024-11-26 19:50:13.685551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.630 [2024-11-26 19:50:13.685563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.630 [2024-11-26 19:50:13.688438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.630 [2024-11-26 19:50:13.688487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.630 [2024-11-26 19:50:13.688499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.630 [2024-11-26 19:50:13.691372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.630 [2024-11-26 19:50:13.691482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.630 [2024-11-26 19:50:13.691494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.630 [2024-11-26 19:50:13.694362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.630 [2024-11-26 19:50:13.694412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.630 [2024-11-26 19:50:13.694423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.630 [2024-11-26 19:50:13.697296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.630 [2024-11-26 19:50:13.697346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.630 [2024-11-26 19:50:13.697358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.630 [2024-11-26 19:50:13.700254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.630 [2024-11-26 19:50:13.700310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.630 [2024-11-26 19:50:13.700322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.630 [2024-11-26 19:50:13.703192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.630 [2024-11-26 
19:50:13.703247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.630 [2024-11-26 19:50:13.703259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.630 [2024-11-26 19:50:13.706124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.630 [2024-11-26 19:50:13.706226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.630 [2024-11-26 19:50:13.706238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.630 [2024-11-26 19:50:13.709141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.630 [2024-11-26 19:50:13.709196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.630 [2024-11-26 19:50:13.709208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.630 [2024-11-26 19:50:13.712030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.630 [2024-11-26 19:50:13.712085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.630 [2024-11-26 19:50:13.712097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.630 [2024-11-26 19:50:13.714909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.630 [2024-11-26 19:50:13.714963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.630 [2024-11-26 19:50:13.714975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.630 [2024-11-26 19:50:13.717795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.630 [2024-11-26 19:50:13.717850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.630 [2024-11-26 19:50:13.717861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.630 [2024-11-26 19:50:13.720659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.630 [2024-11-26 19:50:13.720713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.630 [2024-11-26 19:50:13.720724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.630 [2024-11-26 19:50:13.723564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 
00:16:18.630 [2024-11-26 19:50:13.723619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.630 [2024-11-26 19:50:13.723631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.630 [2024-11-26 19:50:13.726495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.630 [2024-11-26 19:50:13.726551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.630 [2024-11-26 19:50:13.726564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.630 [2024-11-26 19:50:13.729467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.631 [2024-11-26 19:50:13.729581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.631 [2024-11-26 19:50:13.729593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.631 [2024-11-26 19:50:13.732614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.631 [2024-11-26 19:50:13.732730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.631 [2024-11-26 19:50:13.732854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.631 [2024-11-26 19:50:13.735624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.631 [2024-11-26 19:50:13.735728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.631 [2024-11-26 19:50:13.735833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.631 [2024-11-26 19:50:13.738623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.631 [2024-11-26 19:50:13.738724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.631 [2024-11-26 19:50:13.738830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.631 [2024-11-26 19:50:13.741645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.631 [2024-11-26 19:50:13.741761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.631 [2024-11-26 19:50:13.741897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.631 [2024-11-26 19:50:13.744643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) 
with pdu=0x200016eff3c8 00:16:18.631 [2024-11-26 19:50:13.744758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.631 [2024-11-26 19:50:13.744842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.631 [2024-11-26 19:50:13.747749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.631 [2024-11-26 19:50:13.747877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.631 [2024-11-26 19:50:13.747959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.631 [2024-11-26 19:50:13.750786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.631 [2024-11-26 19:50:13.750887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.631 [2024-11-26 19:50:13.750965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.631 [2024-11-26 19:50:13.753774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.631 [2024-11-26 19:50:13.753876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.631 [2024-11-26 19:50:13.753947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.631 [2024-11-26 19:50:13.756781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.631 [2024-11-26 19:50:13.756896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.631 [2024-11-26 19:50:13.757014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.631 [2024-11-26 19:50:13.759837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.631 [2024-11-26 19:50:13.759952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.631 [2024-11-26 19:50:13.760029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.631 [2024-11-26 19:50:13.762856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.631 [2024-11-26 19:50:13.762957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.631 [2024-11-26 19:50:13.763036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.631 [2024-11-26 19:50:13.765850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.631 [2024-11-26 19:50:13.765968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.631 [2024-11-26 19:50:13.766045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.631 [2024-11-26 19:50:13.768861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.631 [2024-11-26 19:50:13.768980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.631 [2024-11-26 19:50:13.769058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.631 [2024-11-26 19:50:13.771882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.631 [2024-11-26 19:50:13.771983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.631 [2024-11-26 19:50:13.772056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.631 [2024-11-26 19:50:13.774861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.631 [2024-11-26 19:50:13.774966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.631 [2024-11-26 19:50:13.774979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.631 [2024-11-26 19:50:13.777858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.631 [2024-11-26 19:50:13.777915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.631 [2024-11-26 19:50:13.777927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.631 [2024-11-26 19:50:13.780802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.631 [2024-11-26 19:50:13.780857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.631 [2024-11-26 19:50:13.780870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.631 [2024-11-26 19:50:13.783786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.631 [2024-11-26 19:50:13.783842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.631 [2024-11-26 19:50:13.783854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.631 [2024-11-26 19:50:13.786748] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.631 [2024-11-26 19:50:13.786815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.631 [2024-11-26 19:50:13.786828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.631 [2024-11-26 19:50:13.789743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.631 [2024-11-26 19:50:13.789807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.631 [2024-11-26 19:50:13.789819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.631 [2024-11-26 19:50:13.792736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.631 [2024-11-26 19:50:13.792864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.631 [2024-11-26 19:50:13.792877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.631 [2024-11-26 19:50:13.795864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.631 [2024-11-26 19:50:13.795923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.631 [2024-11-26 19:50:13.795936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.631 [2024-11-26 19:50:13.798883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.631 [2024-11-26 19:50:13.798940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.631 [2024-11-26 19:50:13.798952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.631 [2024-11-26 19:50:13.801917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.631 [2024-11-26 19:50:13.801973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.631 [2024-11-26 19:50:13.801985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.631 [2024-11-26 19:50:13.804899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.631 [2024-11-26 19:50:13.804955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.631 [2024-11-26 19:50:13.804967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.631 [2024-11-26 19:50:13.807839] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.631 [2024-11-26 19:50:13.807894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.631 [2024-11-26 19:50:13.807906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.631 [2024-11-26 19:50:13.810783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.632 [2024-11-26 19:50:13.810831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.632 [2024-11-26 19:50:13.810842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.632 [2024-11-26 19:50:13.813692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.632 [2024-11-26 19:50:13.813748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.632 [2024-11-26 19:50:13.813760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.632 [2024-11-26 19:50:13.816636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.632 [2024-11-26 19:50:13.816692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.632 [2024-11-26 19:50:13.816703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.632 [2024-11-26 19:50:13.819566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.632 [2024-11-26 19:50:13.819686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.632 [2024-11-26 19:50:13.819698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.632 [2024-11-26 19:50:13.822558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.632 [2024-11-26 19:50:13.822614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.632 [2024-11-26 19:50:13.822625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.632 [2024-11-26 19:50:13.825496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.632 [2024-11-26 19:50:13.825552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.632 [2024-11-26 19:50:13.825564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.632 [2024-11-26 
19:50:13.828467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.632 [2024-11-26 19:50:13.828523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.632 [2024-11-26 19:50:13.828534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.632 [2024-11-26 19:50:13.831423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.632 [2024-11-26 19:50:13.831531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.632 [2024-11-26 19:50:13.831543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.632 [2024-11-26 19:50:13.834402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.632 [2024-11-26 19:50:13.834456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.632 [2024-11-26 19:50:13.834468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.632 [2024-11-26 19:50:13.837309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.632 [2024-11-26 19:50:13.837358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.632 [2024-11-26 19:50:13.837370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.632 [2024-11-26 19:50:13.840260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.632 [2024-11-26 19:50:13.840314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.632 [2024-11-26 19:50:13.840326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.632 [2024-11-26 19:50:13.843223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.632 [2024-11-26 19:50:13.843266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.632 [2024-11-26 19:50:13.843278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.632 [2024-11-26 19:50:13.846171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.632 [2024-11-26 19:50:13.846285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.632 [2024-11-26 19:50:13.846297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:16:18.632 [2024-11-26 19:50:13.849220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.632 [2024-11-26 19:50:13.849278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.632 [2024-11-26 19:50:13.849289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.632 [2024-11-26 19:50:13.852222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.632 [2024-11-26 19:50:13.852274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.632 [2024-11-26 19:50:13.852287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.632 [2024-11-26 19:50:13.855183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.632 [2024-11-26 19:50:13.855241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.632 [2024-11-26 19:50:13.855253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.632 [2024-11-26 19:50:13.858139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.632 [2024-11-26 19:50:13.858189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.632 [2024-11-26 19:50:13.858201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.632 [2024-11-26 19:50:13.861067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.632 [2024-11-26 19:50:13.861174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.632 [2024-11-26 19:50:13.861185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.632 [2024-11-26 19:50:13.864103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.632 [2024-11-26 19:50:13.864158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.632 [2024-11-26 19:50:13.864170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.632 [2024-11-26 19:50:13.867036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.632 [2024-11-26 19:50:13.867105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.632 [2024-11-26 19:50:13.867117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:16:18.632 [2024-11-26 19:50:13.869975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.632 [2024-11-26 19:50:13.870028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.632 [2024-11-26 19:50:13.870039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.894 [2024-11-26 19:50:13.872908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.894 [2024-11-26 19:50:13.872964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.894 [2024-11-26 19:50:13.872976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.894 [2024-11-26 19:50:13.875831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.894 [2024-11-26 19:50:13.875886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.894 [2024-11-26 19:50:13.875898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.894 [2024-11-26 19:50:13.878725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.894 [2024-11-26 19:50:13.878791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.894 [2024-11-26 19:50:13.878804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.894 [2024-11-26 19:50:13.881657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.894 [2024-11-26 19:50:13.881712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.894 [2024-11-26 19:50:13.881724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.894 [2024-11-26 19:50:13.884575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.894 [2024-11-26 19:50:13.884689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.894 [2024-11-26 19:50:13.884701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.894 [2024-11-26 19:50:13.887603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.894 [2024-11-26 19:50:13.887650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.894 [2024-11-26 19:50:13.887662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.894 [2024-11-26 19:50:13.890519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.894 [2024-11-26 19:50:13.890574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.894 [2024-11-26 19:50:13.890585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.894 [2024-11-26 19:50:13.893473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.894 [2024-11-26 19:50:13.893528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.894 [2024-11-26 19:50:13.893540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.894 [2024-11-26 19:50:13.896404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.894 [2024-11-26 19:50:13.896515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.894 [2024-11-26 19:50:13.896527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.894 [2024-11-26 19:50:13.899413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.894 [2024-11-26 19:50:13.899467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.894 [2024-11-26 19:50:13.899479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.894 [2024-11-26 19:50:13.902309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.894 [2024-11-26 19:50:13.902358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.894 [2024-11-26 19:50:13.902370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.894 [2024-11-26 19:50:13.905187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.894 [2024-11-26 19:50:13.905239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.894 [2024-11-26 19:50:13.905250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.894 [2024-11-26 19:50:13.908107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.894 [2024-11-26 19:50:13.908223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.894 [2024-11-26 19:50:13.908234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.894 [2024-11-26 19:50:13.911138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.894 [2024-11-26 19:50:13.911195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.894 [2024-11-26 19:50:13.911207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.894 [2024-11-26 19:50:13.914057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.894 [2024-11-26 19:50:13.914112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.894 [2024-11-26 19:50:13.914124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.894 [2024-11-26 19:50:13.916985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.894 [2024-11-26 19:50:13.917041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.894 [2024-11-26 19:50:13.917052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.894 [2024-11-26 19:50:13.919903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.894 [2024-11-26 19:50:13.919951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.894 [2024-11-26 19:50:13.919963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.894 [2024-11-26 19:50:13.922833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.894 [2024-11-26 19:50:13.922888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.894 [2024-11-26 19:50:13.922899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.894 [2024-11-26 19:50:13.925761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.894 [2024-11-26 19:50:13.925820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.894 [2024-11-26 19:50:13.925831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.894 [2024-11-26 19:50:13.928687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.894 [2024-11-26 19:50:13.928742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.894 [2024-11-26 19:50:13.928754] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.894 [2024-11-26 19:50:13.931585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.894 [2024-11-26 19:50:13.931700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.894 [2024-11-26 19:50:13.931711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.894 [2024-11-26 19:50:13.934629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.894 [2024-11-26 19:50:13.934684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.894 [2024-11-26 19:50:13.934696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.894 [2024-11-26 19:50:13.937592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.894 [2024-11-26 19:50:13.937647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.894 [2024-11-26 19:50:13.937659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.894 [2024-11-26 19:50:13.940553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.894 [2024-11-26 19:50:13.940603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.894 [2024-11-26 19:50:13.940614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.894 [2024-11-26 19:50:13.943511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.895 [2024-11-26 19:50:13.943611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.895 [2024-11-26 19:50:13.943623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.895 [2024-11-26 19:50:13.946521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.895 [2024-11-26 19:50:13.946585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.895 [2024-11-26 19:50:13.946597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.895 [2024-11-26 19:50:13.949438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.895 [2024-11-26 19:50:13.949493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.895 [2024-11-26 19:50:13.949504] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.895 [2024-11-26 19:50:13.952388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.895 [2024-11-26 19:50:13.952443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.895 [2024-11-26 19:50:13.952454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.895 [2024-11-26 19:50:13.955317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.895 [2024-11-26 19:50:13.955417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.895 [2024-11-26 19:50:13.955429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.895 [2024-11-26 19:50:13.958326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.895 [2024-11-26 19:50:13.958380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.895 [2024-11-26 19:50:13.958392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.895 [2024-11-26 19:50:13.961281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.895 [2024-11-26 19:50:13.961335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.895 [2024-11-26 19:50:13.961347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.895 [2024-11-26 19:50:13.964222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.895 [2024-11-26 19:50:13.964278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.895 [2024-11-26 19:50:13.964290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.895 [2024-11-26 19:50:13.967159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.895 [2024-11-26 19:50:13.967214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.895 [2024-11-26 19:50:13.967226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.895 [2024-11-26 19:50:13.970075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.895 [2024-11-26 19:50:13.970172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.895 [2024-11-26 
19:50:13.970184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.895 [2024-11-26 19:50:13.973100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.895 [2024-11-26 19:50:13.973151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.895 [2024-11-26 19:50:13.973163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.895 [2024-11-26 19:50:13.976062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.895 [2024-11-26 19:50:13.976100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.895 [2024-11-26 19:50:13.976112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.895 [2024-11-26 19:50:13.979044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.895 [2024-11-26 19:50:13.979105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.895 [2024-11-26 19:50:13.979117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.895 [2024-11-26 19:50:13.981994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.895 [2024-11-26 19:50:13.982049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.895 [2024-11-26 19:50:13.982061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.895 [2024-11-26 19:50:13.984936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.895 [2024-11-26 19:50:13.984991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.895 [2024-11-26 19:50:13.985003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.895 [2024-11-26 19:50:13.987860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.895 [2024-11-26 19:50:13.987916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.895 [2024-11-26 19:50:13.987928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.895 [2024-11-26 19:50:13.990780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.895 [2024-11-26 19:50:13.990843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:18.895 [2024-11-26 19:50:13.990855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.895 [2024-11-26 19:50:13.993688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.895 [2024-11-26 19:50:13.993743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.895 [2024-11-26 19:50:13.993754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.895 [2024-11-26 19:50:13.996617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.895 [2024-11-26 19:50:13.996731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.895 [2024-11-26 19:50:13.996743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.895 [2024-11-26 19:50:13.999619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.895 [2024-11-26 19:50:13.999672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.895 [2024-11-26 19:50:13.999684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.895 [2024-11-26 19:50:14.002552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.895 [2024-11-26 19:50:14.002606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.895 [2024-11-26 19:50:14.002618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.895 [2024-11-26 19:50:14.005495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.895 [2024-11-26 19:50:14.005553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.895 [2024-11-26 19:50:14.005564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.895 [2024-11-26 19:50:14.008470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.895 [2024-11-26 19:50:14.008577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.895 [2024-11-26 19:50:14.008588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.895 [2024-11-26 19:50:14.011528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.895 [2024-11-26 19:50:14.011580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:16:18.895 [2024-11-26 19:50:14.011592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.895 [2024-11-26 19:50:14.014454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.895 [2024-11-26 19:50:14.014510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.895 [2024-11-26 19:50:14.014522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.895 [2024-11-26 19:50:14.017370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.895 [2024-11-26 19:50:14.017426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.895 [2024-11-26 19:50:14.017438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.895 10386.00 IOPS, 1298.25 MiB/s [2024-11-26T19:50:14.142Z] [2024-11-26 19:50:14.021353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.895 [2024-11-26 19:50:14.021410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.895 [2024-11-26 19:50:14.021422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.895 [2024-11-26 19:50:14.024310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.896 [2024-11-26 19:50:14.024422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.896 [2024-11-26 19:50:14.024433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.896 [2024-11-26 19:50:14.027316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.896 [2024-11-26 19:50:14.027371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.896 [2024-11-26 19:50:14.027383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.896 [2024-11-26 19:50:14.030236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.896 [2024-11-26 19:50:14.030291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.896 [2024-11-26 19:50:14.030303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.896 [2024-11-26 19:50:14.033175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.896 [2024-11-26 19:50:14.033229] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.896 [2024-11-26 19:50:14.033241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.896 [2024-11-26 19:50:14.036092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.896 [2024-11-26 19:50:14.036146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.896 [2024-11-26 19:50:14.036158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.896 [2024-11-26 19:50:14.039009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.896 [2024-11-26 19:50:14.039082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.896 [2024-11-26 19:50:14.039093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.896 [2024-11-26 19:50:14.041937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.896 [2024-11-26 19:50:14.041991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.896 [2024-11-26 19:50:14.042003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.896 [2024-11-26 19:50:14.044849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.896 [2024-11-26 19:50:14.044905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.896 [2024-11-26 19:50:14.044916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.896 [2024-11-26 19:50:14.047762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.896 [2024-11-26 19:50:14.047827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.896 [2024-11-26 19:50:14.047839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.896 [2024-11-26 19:50:14.050690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.896 [2024-11-26 19:50:14.050814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.896 [2024-11-26 19:50:14.050826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.896 [2024-11-26 19:50:14.053678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.896 [2024-11-26 
19:50:14.053733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.896 [2024-11-26 19:50:14.053744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.896 [2024-11-26 19:50:14.056625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.896 [2024-11-26 19:50:14.056675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.896 [2024-11-26 19:50:14.056688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.896 [2024-11-26 19:50:14.059551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.896 [2024-11-26 19:50:14.059606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.896 [2024-11-26 19:50:14.059618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.896 [2024-11-26 19:50:14.062519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.896 [2024-11-26 19:50:14.062634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.896 [2024-11-26 19:50:14.062646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.896 [2024-11-26 19:50:14.065584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.896 [2024-11-26 19:50:14.065631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.896 [2024-11-26 19:50:14.065643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.896 [2024-11-26 19:50:14.068538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.896 [2024-11-26 19:50:14.068593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.896 [2024-11-26 19:50:14.068605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.896 [2024-11-26 19:50:14.071487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.896 [2024-11-26 19:50:14.071541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.896 [2024-11-26 19:50:14.071552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.896 [2024-11-26 19:50:14.074416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 
00:16:18.896 [2024-11-26 19:50:14.074540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.896 [2024-11-26 19:50:14.074552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.896 [2024-11-26 19:50:14.077422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.896 [2024-11-26 19:50:14.077478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.896 [2024-11-26 19:50:14.077489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.896 [2024-11-26 19:50:14.080354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.896 [2024-11-26 19:50:14.080409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.896 [2024-11-26 19:50:14.080421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.896 [2024-11-26 19:50:14.083334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.896 [2024-11-26 19:50:14.083390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.896 [2024-11-26 19:50:14.083401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.896 [2024-11-26 19:50:14.086270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.896 [2024-11-26 19:50:14.086368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.896 [2024-11-26 19:50:14.086380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.896 [2024-11-26 19:50:14.089242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.896 [2024-11-26 19:50:14.089296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.896 [2024-11-26 19:50:14.089307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.896 [2024-11-26 19:50:14.092203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.896 [2024-11-26 19:50:14.092260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.896 [2024-11-26 19:50:14.092272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.896 [2024-11-26 19:50:14.095139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with 
pdu=0x200016eff3c8 00:16:18.896 [2024-11-26 19:50:14.095215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.896 [2024-11-26 19:50:14.095227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.896 [2024-11-26 19:50:14.098106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.896 [2024-11-26 19:50:14.098161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.896 [2024-11-26 19:50:14.098173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.896 [2024-11-26 19:50:14.101053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.896 [2024-11-26 19:50:14.101163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.896 [2024-11-26 19:50:14.101174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.896 [2024-11-26 19:50:14.104068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.896 [2024-11-26 19:50:14.104124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.897 [2024-11-26 19:50:14.104135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.897 [2024-11-26 19:50:14.107022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.897 [2024-11-26 19:50:14.107087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.897 [2024-11-26 19:50:14.107099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.897 [2024-11-26 19:50:14.109936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.897 [2024-11-26 19:50:14.109990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.897 [2024-11-26 19:50:14.110002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.897 [2024-11-26 19:50:14.112911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.897 [2024-11-26 19:50:14.112967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.897 [2024-11-26 19:50:14.112978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.897 [2024-11-26 19:50:14.115859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.897 [2024-11-26 19:50:14.115917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.897 [2024-11-26 19:50:14.115929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.897 [2024-11-26 19:50:14.118789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.897 [2024-11-26 19:50:14.118838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.897 [2024-11-26 19:50:14.118850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.897 [2024-11-26 19:50:14.121704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.897 [2024-11-26 19:50:14.121760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.897 [2024-11-26 19:50:14.121783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.897 [2024-11-26 19:50:14.124658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.897 [2024-11-26 19:50:14.124709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.897 [2024-11-26 19:50:14.124720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:18.897 [2024-11-26 19:50:14.127644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.897 [2024-11-26 19:50:14.127748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.897 [2024-11-26 19:50:14.127760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:18.897 [2024-11-26 19:50:14.130648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.897 [2024-11-26 19:50:14.130705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.897 [2024-11-26 19:50:14.130717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.897 [2024-11-26 19:50:14.133602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.897 [2024-11-26 19:50:14.133660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.897 [2024-11-26 19:50:14.133672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:18.897 [2024-11-26 19:50:14.136557] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:18.897 [2024-11-26 19:50:14.136614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:18.897 [2024-11-26 19:50:14.136626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.156 [2024-11-26 19:50:14.139501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.156 [2024-11-26 19:50:14.139603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.156 [2024-11-26 19:50:14.139614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.156 [2024-11-26 19:50:14.142506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.156 [2024-11-26 19:50:14.142562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.156 [2024-11-26 19:50:14.142574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.156 [2024-11-26 19:50:14.145467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.156 [2024-11-26 19:50:14.145523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.156 [2024-11-26 19:50:14.145534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.156 [2024-11-26 19:50:14.148410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.156 [2024-11-26 19:50:14.148465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.156 [2024-11-26 19:50:14.148476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.156 [2024-11-26 19:50:14.151348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.156 [2024-11-26 19:50:14.151457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.156 [2024-11-26 19:50:14.151469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.156 [2024-11-26 19:50:14.154334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.156 [2024-11-26 19:50:14.154390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.156 [2024-11-26 19:50:14.154402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.156 [2024-11-26 19:50:14.157294] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.156 [2024-11-26 19:50:14.157344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.156 [2024-11-26 19:50:14.157355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.156 [2024-11-26 19:50:14.160216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.156 [2024-11-26 19:50:14.160271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.156 [2024-11-26 19:50:14.160283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.156 [2024-11-26 19:50:14.163201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.156 [2024-11-26 19:50:14.163258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.156 [2024-11-26 19:50:14.163270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.156 [2024-11-26 19:50:14.166133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.156 [2024-11-26 19:50:14.166233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.156 [2024-11-26 19:50:14.166245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.156 [2024-11-26 19:50:14.169141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.156 [2024-11-26 19:50:14.169196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.156 [2024-11-26 19:50:14.169207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.156 [2024-11-26 19:50:14.172084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.156 [2024-11-26 19:50:14.172141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.156 [2024-11-26 19:50:14.172152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.156 [2024-11-26 19:50:14.175003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.156 [2024-11-26 19:50:14.175057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.156 [2024-11-26 19:50:14.175078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.156 
[2024-11-26 19:50:14.177940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.156 [2024-11-26 19:50:14.177996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.156 [2024-11-26 19:50:14.178007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.156 [2024-11-26 19:50:14.180921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.156 [2024-11-26 19:50:14.180975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.156 [2024-11-26 19:50:14.180987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.156 [2024-11-26 19:50:14.183856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.156 [2024-11-26 19:50:14.183927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.156 [2024-11-26 19:50:14.183944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.156 [2024-11-26 19:50:14.186835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.156 [2024-11-26 19:50:14.186882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.156 [2024-11-26 19:50:14.186893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.156 [2024-11-26 19:50:14.189826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.156 [2024-11-26 19:50:14.189882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.156 [2024-11-26 19:50:14.189894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.156 [2024-11-26 19:50:14.192817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.156 [2024-11-26 19:50:14.192873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.156 [2024-11-26 19:50:14.192884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.156 [2024-11-26 19:50:14.195754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.156 [2024-11-26 19:50:14.195820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.156 [2024-11-26 19:50:14.195832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:16:19.156 [2024-11-26 19:50:14.198671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.156 [2024-11-26 19:50:14.198724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.156 [2024-11-26 19:50:14.198736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.156 [2024-11-26 19:50:14.201627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.156 [2024-11-26 19:50:14.201741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.156 [2024-11-26 19:50:14.201753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.156 [2024-11-26 19:50:14.204679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.156 [2024-11-26 19:50:14.204735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.156 [2024-11-26 19:50:14.204747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.157 [2024-11-26 19:50:14.207603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.157 [2024-11-26 19:50:14.207653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.157 [2024-11-26 19:50:14.207665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.157 [2024-11-26 19:50:14.210558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.157 [2024-11-26 19:50:14.210614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.157 [2024-11-26 19:50:14.210625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.157 [2024-11-26 19:50:14.213526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.157 [2024-11-26 19:50:14.213636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.157 [2024-11-26 19:50:14.213648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.157 [2024-11-26 19:50:14.216537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.157 [2024-11-26 19:50:14.216588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.157 [2024-11-26 19:50:14.216600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:16:19.157 [2024-11-26 19:50:14.219443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.157 [2024-11-26 19:50:14.219496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.157 [2024-11-26 19:50:14.219508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.157 [2024-11-26 19:50:14.222368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.157 [2024-11-26 19:50:14.222425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.157 [2024-11-26 19:50:14.222437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.157 [2024-11-26 19:50:14.225289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.157 [2024-11-26 19:50:14.225398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.157 [2024-11-26 19:50:14.225410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.157 [2024-11-26 19:50:14.228311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.157 [2024-11-26 19:50:14.228367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.157 [2024-11-26 19:50:14.228379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.157 [2024-11-26 19:50:14.231217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.157 [2024-11-26 19:50:14.231271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.157 [2024-11-26 19:50:14.231283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.157 [2024-11-26 19:50:14.234175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.157 [2024-11-26 19:50:14.234229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.157 [2024-11-26 19:50:14.234241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.157 [2024-11-26 19:50:14.237099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.157 [2024-11-26 19:50:14.237154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.157 [2024-11-26 19:50:14.237166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.157 [2024-11-26 19:50:14.240033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.157 [2024-11-26 19:50:14.240088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.157 [2024-11-26 19:50:14.240100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.157 [2024-11-26 19:50:14.242954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.157 [2024-11-26 19:50:14.243009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.157 [2024-11-26 19:50:14.243021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.157 [2024-11-26 19:50:14.245894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.157 [2024-11-26 19:50:14.245948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.157 [2024-11-26 19:50:14.245960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.157 [2024-11-26 19:50:14.248819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.157 [2024-11-26 19:50:14.248874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.157 [2024-11-26 19:50:14.248886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.157 [2024-11-26 19:50:14.251728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.157 [2024-11-26 19:50:14.251853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.157 [2024-11-26 19:50:14.251865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.157 [2024-11-26 19:50:14.254723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.157 [2024-11-26 19:50:14.254788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.157 [2024-11-26 19:50:14.254800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.157 [2024-11-26 19:50:14.257646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.157 [2024-11-26 19:50:14.257701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.157 [2024-11-26 19:50:14.257712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.157 [2024-11-26 19:50:14.260603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.157 [2024-11-26 19:50:14.260652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.157 [2024-11-26 19:50:14.260664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.157 [2024-11-26 19:50:14.263539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.157 [2024-11-26 19:50:14.263654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.157 [2024-11-26 19:50:14.263665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.157 [2024-11-26 19:50:14.266535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.157 [2024-11-26 19:50:14.266583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.157 [2024-11-26 19:50:14.266594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.157 [2024-11-26 19:50:14.269469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.157 [2024-11-26 19:50:14.269524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.157 [2024-11-26 19:50:14.269536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.157 [2024-11-26 19:50:14.272408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.157 [2024-11-26 19:50:14.272463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.157 [2024-11-26 19:50:14.272474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.157 [2024-11-26 19:50:14.275347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.157 [2024-11-26 19:50:14.275454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.157 [2024-11-26 19:50:14.275466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.157 [2024-11-26 19:50:14.278339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.157 [2024-11-26 19:50:14.278395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.157 [2024-11-26 19:50:14.278406] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.157 [2024-11-26 19:50:14.281274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.157 [2024-11-26 19:50:14.281329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.157 [2024-11-26 19:50:14.281341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.157 [2024-11-26 19:50:14.284218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.157 [2024-11-26 19:50:14.284275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.157 [2024-11-26 19:50:14.284287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.157 [2024-11-26 19:50:14.287170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.157 [2024-11-26 19:50:14.287223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.157 [2024-11-26 19:50:14.287234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.157 [2024-11-26 19:50:14.290096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.157 [2024-11-26 19:50:14.290204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.157 [2024-11-26 19:50:14.290216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.157 [2024-11-26 19:50:14.293110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.157 [2024-11-26 19:50:14.293165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.157 [2024-11-26 19:50:14.293177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.157 [2024-11-26 19:50:14.296024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.157 [2024-11-26 19:50:14.296079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.157 [2024-11-26 19:50:14.296090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.157 [2024-11-26 19:50:14.298956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.157 [2024-11-26 19:50:14.299010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.157 [2024-11-26 19:50:14.299021] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.157 [2024-11-26 19:50:14.301909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.157 [2024-11-26 19:50:14.301963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.157 [2024-11-26 19:50:14.301975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.157 [2024-11-26 19:50:14.304845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.157 [2024-11-26 19:50:14.304903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.157 [2024-11-26 19:50:14.304914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.157 [2024-11-26 19:50:14.307754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.157 [2024-11-26 19:50:14.307818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.157 [2024-11-26 19:50:14.307841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.157 [2024-11-26 19:50:14.310665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.157 [2024-11-26 19:50:14.310720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.157 [2024-11-26 19:50:14.310732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.157 [2024-11-26 19:50:14.313606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.157 [2024-11-26 19:50:14.313661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.157 [2024-11-26 19:50:14.313673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.157 [2024-11-26 19:50:14.316517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.157 [2024-11-26 19:50:14.316628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.157 [2024-11-26 19:50:14.316640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.157 [2024-11-26 19:50:14.319517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.157 [2024-11-26 19:50:14.319573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.157 [2024-11-26 
19:50:14.319585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.157 [2024-11-26 19:50:14.322453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.157 [2024-11-26 19:50:14.322508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.157 [2024-11-26 19:50:14.322520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.157 [2024-11-26 19:50:14.325383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.158 [2024-11-26 19:50:14.325439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.158 [2024-11-26 19:50:14.325450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.158 [2024-11-26 19:50:14.328317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.158 [2024-11-26 19:50:14.328415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.158 [2024-11-26 19:50:14.328427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.158 [2024-11-26 19:50:14.331319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.158 [2024-11-26 19:50:14.331373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.158 [2024-11-26 19:50:14.331384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.158 [2024-11-26 19:50:14.334252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.158 [2024-11-26 19:50:14.334307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.158 [2024-11-26 19:50:14.334319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.158 [2024-11-26 19:50:14.337192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.158 [2024-11-26 19:50:14.337245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.158 [2024-11-26 19:50:14.337257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.158 [2024-11-26 19:50:14.340108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.158 [2024-11-26 19:50:14.340164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:19.158 [2024-11-26 19:50:14.340176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.158 [2024-11-26 19:50:14.343053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.158 [2024-11-26 19:50:14.343172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.158 [2024-11-26 19:50:14.343183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.158 [2024-11-26 19:50:14.346068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.158 [2024-11-26 19:50:14.346123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.158 [2024-11-26 19:50:14.346135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.158 [2024-11-26 19:50:14.349000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.158 [2024-11-26 19:50:14.349058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.158 [2024-11-26 19:50:14.349069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.158 [2024-11-26 19:50:14.351945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.158 [2024-11-26 19:50:14.351999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.158 [2024-11-26 19:50:14.352010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.158 [2024-11-26 19:50:14.354859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.158 [2024-11-26 19:50:14.354916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.158 [2024-11-26 19:50:14.354927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.158 [2024-11-26 19:50:14.357784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.158 [2024-11-26 19:50:14.357839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.158 [2024-11-26 19:50:14.357851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.158 [2024-11-26 19:50:14.360717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.158 [2024-11-26 19:50:14.360784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:16:19.158 [2024-11-26 19:50:14.360796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.158 [2024-11-26 19:50:14.363650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.158 [2024-11-26 19:50:14.363705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.158 [2024-11-26 19:50:14.363722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.158 [2024-11-26 19:50:14.366567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.158 [2024-11-26 19:50:14.366621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.158 [2024-11-26 19:50:14.366638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.158 [2024-11-26 19:50:14.369494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.158 [2024-11-26 19:50:14.369594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.158 [2024-11-26 19:50:14.369606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.158 [2024-11-26 19:50:14.372509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.158 [2024-11-26 19:50:14.372564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.158 [2024-11-26 19:50:14.372575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.158 [2024-11-26 19:50:14.375499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.158 [2024-11-26 19:50:14.375556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.158 [2024-11-26 19:50:14.375568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.158 [2024-11-26 19:50:14.378535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.158 [2024-11-26 19:50:14.378590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.158 [2024-11-26 19:50:14.378602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.158 [2024-11-26 19:50:14.381471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.158 [2024-11-26 19:50:14.381580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12704 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.158 [2024-11-26 19:50:14.381591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.158 [2024-11-26 19:50:14.384497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.158 [2024-11-26 19:50:14.384552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.158 [2024-11-26 19:50:14.384564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.158 [2024-11-26 19:50:14.387452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.158 [2024-11-26 19:50:14.387507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.158 [2024-11-26 19:50:14.387519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.158 [2024-11-26 19:50:14.390354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.158 [2024-11-26 19:50:14.390409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.158 [2024-11-26 19:50:14.390421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.158 [2024-11-26 19:50:14.393309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.158 [2024-11-26 19:50:14.393417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.158 [2024-11-26 19:50:14.393428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.158 [2024-11-26 19:50:14.396329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.158 [2024-11-26 19:50:14.396384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.158 [2024-11-26 19:50:14.396396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.158 [2024-11-26 19:50:14.399253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.158 [2024-11-26 19:50:14.399308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.158 [2024-11-26 19:50:14.399319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.418 [2024-11-26 19:50:14.402155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.418 [2024-11-26 19:50:14.402210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.418 [2024-11-26 19:50:14.402221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.418 [2024-11-26 19:50:14.405065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.418 [2024-11-26 19:50:14.405121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.418 [2024-11-26 19:50:14.405132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.418 [2024-11-26 19:50:14.407998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.418 [2024-11-26 19:50:14.408054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.418 [2024-11-26 19:50:14.408065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.418 [2024-11-26 19:50:14.410949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.418 [2024-11-26 19:50:14.411005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.418 [2024-11-26 19:50:14.411017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.418 [2024-11-26 19:50:14.413908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.418 [2024-11-26 19:50:14.413962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.418 [2024-11-26 19:50:14.413973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.418 [2024-11-26 19:50:14.416820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.418 [2024-11-26 19:50:14.416875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.418 [2024-11-26 19:50:14.416887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.418 [2024-11-26 19:50:14.419750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.418 [2024-11-26 19:50:14.419875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.418 [2024-11-26 19:50:14.419886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.418 [2024-11-26 19:50:14.422742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.418 [2024-11-26 19:50:14.422809] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.418 [2024-11-26 19:50:14.422821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.418 [2024-11-26 19:50:14.425656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.418 [2024-11-26 19:50:14.425704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.418 [2024-11-26 19:50:14.425715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.418 [2024-11-26 19:50:14.428585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.418 [2024-11-26 19:50:14.428636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.418 [2024-11-26 19:50:14.428647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.418 [2024-11-26 19:50:14.431520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.418 [2024-11-26 19:50:14.431618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.418 [2024-11-26 19:50:14.431630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.418 [2024-11-26 19:50:14.434539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.418 [2024-11-26 19:50:14.434595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.418 [2024-11-26 19:50:14.434607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.418 [2024-11-26 19:50:14.437453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.418 [2024-11-26 19:50:14.437509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.418 [2024-11-26 19:50:14.437521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.418 [2024-11-26 19:50:14.440363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.418 [2024-11-26 19:50:14.440418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.418 [2024-11-26 19:50:14.440430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.418 [2024-11-26 19:50:14.443345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.418 [2024-11-26 19:50:14.443445] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.418 [2024-11-26 19:50:14.443457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.418 [2024-11-26 19:50:14.446444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.418 [2024-11-26 19:50:14.446498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.418 [2024-11-26 19:50:14.446509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.418 [2024-11-26 19:50:14.449499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.418 [2024-11-26 19:50:14.449548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.418 [2024-11-26 19:50:14.449561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.418 [2024-11-26 19:50:14.452602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.418 [2024-11-26 19:50:14.452646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.418 [2024-11-26 19:50:14.452659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.418 [2024-11-26 19:50:14.455656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.418 [2024-11-26 19:50:14.455711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.418 [2024-11-26 19:50:14.455722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.418 [2024-11-26 19:50:14.458711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.418 [2024-11-26 19:50:14.458842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.418 [2024-11-26 19:50:14.458854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.418 [2024-11-26 19:50:14.461805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.418 [2024-11-26 19:50:14.461868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.418 [2024-11-26 19:50:14.461880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.418 [2024-11-26 19:50:14.464807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.419 [2024-11-26 
19:50:14.464856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.419 [2024-11-26 19:50:14.464868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.419 [2024-11-26 19:50:14.467739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.419 [2024-11-26 19:50:14.467805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.419 [2024-11-26 19:50:14.467817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.419 [2024-11-26 19:50:14.470639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.419 [2024-11-26 19:50:14.470740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.419 [2024-11-26 19:50:14.470752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.419 [2024-11-26 19:50:14.473660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.419 [2024-11-26 19:50:14.473710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.419 [2024-11-26 19:50:14.473722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.419 [2024-11-26 19:50:14.476582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.419 [2024-11-26 19:50:14.476638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.419 [2024-11-26 19:50:14.476650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.419 [2024-11-26 19:50:14.479553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.419 [2024-11-26 19:50:14.479608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.419 [2024-11-26 19:50:14.479620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.419 [2024-11-26 19:50:14.482490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.419 [2024-11-26 19:50:14.482588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.419 [2024-11-26 19:50:14.482600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.419 [2024-11-26 19:50:14.485496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 
00:16:19.419 [2024-11-26 19:50:14.485551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.419 [2024-11-26 19:50:14.485563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.419 [2024-11-26 19:50:14.488424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.419 [2024-11-26 19:50:14.488481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.419 [2024-11-26 19:50:14.488493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.419 [2024-11-26 19:50:14.491372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.419 [2024-11-26 19:50:14.491428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.419 [2024-11-26 19:50:14.491440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.419 [2024-11-26 19:50:14.494320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.419 [2024-11-26 19:50:14.494423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.419 [2024-11-26 19:50:14.494434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.419 [2024-11-26 19:50:14.497335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.419 [2024-11-26 19:50:14.497391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.419 [2024-11-26 19:50:14.497402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.419 [2024-11-26 19:50:14.500300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.419 [2024-11-26 19:50:14.500350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.419 [2024-11-26 19:50:14.500362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.419 [2024-11-26 19:50:14.503282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.419 [2024-11-26 19:50:14.503338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.419 [2024-11-26 19:50:14.503349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.419 [2024-11-26 19:50:14.506233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) 
with pdu=0x200016eff3c8 00:16:19.419 [2024-11-26 19:50:14.506290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.419 [2024-11-26 19:50:14.506302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.419 [2024-11-26 19:50:14.509179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.419 [2024-11-26 19:50:14.509304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.419 [2024-11-26 19:50:14.509315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.419 [2024-11-26 19:50:14.512238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.419 [2024-11-26 19:50:14.512289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.419 [2024-11-26 19:50:14.512301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.419 [2024-11-26 19:50:14.515223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.419 [2024-11-26 19:50:14.515279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.419 [2024-11-26 19:50:14.515291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.419 [2024-11-26 19:50:14.518178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.419 [2024-11-26 19:50:14.518242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.419 [2024-11-26 19:50:14.518254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.419 [2024-11-26 19:50:14.521135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.419 [2024-11-26 19:50:14.521194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.419 [2024-11-26 19:50:14.521206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.419 [2024-11-26 19:50:14.524093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.419 [2024-11-26 19:50:14.524214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.419 [2024-11-26 19:50:14.524226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.419 [2024-11-26 19:50:14.527163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.419 [2024-11-26 19:50:14.527218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.419 [2024-11-26 19:50:14.527230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.419 [2024-11-26 19:50:14.530067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.419 [2024-11-26 19:50:14.530122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.419 [2024-11-26 19:50:14.530134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.419 [2024-11-26 19:50:14.533034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.419 [2024-11-26 19:50:14.533090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.419 [2024-11-26 19:50:14.533102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.419 [2024-11-26 19:50:14.536001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.419 [2024-11-26 19:50:14.536056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.419 [2024-11-26 19:50:14.536068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.419 [2024-11-26 19:50:14.538932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.419 [2024-11-26 19:50:14.538988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.419 [2024-11-26 19:50:14.539001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.419 [2024-11-26 19:50:14.541886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.419 [2024-11-26 19:50:14.541942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.419 [2024-11-26 19:50:14.541954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.419 [2024-11-26 19:50:14.544839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.419 [2024-11-26 19:50:14.544896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.420 [2024-11-26 19:50:14.544907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.420 [2024-11-26 19:50:14.547793] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.420 [2024-11-26 19:50:14.547849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.420 [2024-11-26 19:50:14.547861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.420 [2024-11-26 19:50:14.550716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.420 [2024-11-26 19:50:14.550858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.420 [2024-11-26 19:50:14.550870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.420 [2024-11-26 19:50:14.553780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.420 [2024-11-26 19:50:14.553835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.420 [2024-11-26 19:50:14.553847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.420 [2024-11-26 19:50:14.556712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.420 [2024-11-26 19:50:14.556780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.420 [2024-11-26 19:50:14.556792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.420 [2024-11-26 19:50:14.559662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.420 [2024-11-26 19:50:14.559714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.420 [2024-11-26 19:50:14.559726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.420 [2024-11-26 19:50:14.562652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.420 [2024-11-26 19:50:14.562760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.420 [2024-11-26 19:50:14.562782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.420 [2024-11-26 19:50:14.565684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.420 [2024-11-26 19:50:14.565736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.420 [2024-11-26 19:50:14.565748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.420 [2024-11-26 19:50:14.568635] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.420 [2024-11-26 19:50:14.568693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.420 [2024-11-26 19:50:14.568704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.420 [2024-11-26 19:50:14.571601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.420 [2024-11-26 19:50:14.571651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.420 [2024-11-26 19:50:14.571662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.420 [2024-11-26 19:50:14.574549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.420 [2024-11-26 19:50:14.574668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.420 [2024-11-26 19:50:14.574679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.420 [2024-11-26 19:50:14.577577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.420 [2024-11-26 19:50:14.577636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.420 [2024-11-26 19:50:14.577648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.420 [2024-11-26 19:50:14.580521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.420 [2024-11-26 19:50:14.580578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.420 [2024-11-26 19:50:14.580590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.420 [2024-11-26 19:50:14.583493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.420 [2024-11-26 19:50:14.583548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.420 [2024-11-26 19:50:14.583561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.420 [2024-11-26 19:50:14.586426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.420 [2024-11-26 19:50:14.586532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.420 [2024-11-26 19:50:14.586544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.420 
[2024-11-26 19:50:14.589484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.420 [2024-11-26 19:50:14.589541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.420 [2024-11-26 19:50:14.589553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.420 [2024-11-26 19:50:14.592443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.420 [2024-11-26 19:50:14.592499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.420 [2024-11-26 19:50:14.592510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.420 [2024-11-26 19:50:14.595408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.420 [2024-11-26 19:50:14.595466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.420 [2024-11-26 19:50:14.595478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.420 [2024-11-26 19:50:14.598397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.420 [2024-11-26 19:50:14.598510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.420 [2024-11-26 19:50:14.598521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.420 [2024-11-26 19:50:14.601459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.420 [2024-11-26 19:50:14.601509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.420 [2024-11-26 19:50:14.601522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.420 [2024-11-26 19:50:14.604416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.420 [2024-11-26 19:50:14.604468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.420 [2024-11-26 19:50:14.604480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.420 [2024-11-26 19:50:14.607372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.420 [2024-11-26 19:50:14.607419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.420 [2024-11-26 19:50:14.607431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:16:19.420 [2024-11-26 19:50:14.610307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.420 [2024-11-26 19:50:14.610415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.420 [2024-11-26 19:50:14.610427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.420 [2024-11-26 19:50:14.613352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.420 [2024-11-26 19:50:14.613402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.420 [2024-11-26 19:50:14.613414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.420 [2024-11-26 19:50:14.616387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.420 [2024-11-26 19:50:14.616506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.420 [2024-11-26 19:50:14.616600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.420 [2024-11-26 19:50:14.619416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.420 [2024-11-26 19:50:14.619536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.420 [2024-11-26 19:50:14.619611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.420 [2024-11-26 19:50:14.622491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.420 [2024-11-26 19:50:14.622595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.420 [2024-11-26 19:50:14.622684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.420 [2024-11-26 19:50:14.625523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.420 [2024-11-26 19:50:14.625639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.420 [2024-11-26 19:50:14.625715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.421 [2024-11-26 19:50:14.628582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.421 [2024-11-26 19:50:14.628698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.421 [2024-11-26 19:50:14.628783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 
cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.421 [2024-11-26 19:50:14.631601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.421 [2024-11-26 19:50:14.631718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.421 [2024-11-26 19:50:14.631805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.421 [2024-11-26 19:50:14.634621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.421 [2024-11-26 19:50:14.634725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.421 [2024-11-26 19:50:14.634808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.421 [2024-11-26 19:50:14.637629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.421 [2024-11-26 19:50:14.637725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.421 [2024-11-26 19:50:14.637738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.421 [2024-11-26 19:50:14.640670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.421 [2024-11-26 19:50:14.640710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.421 [2024-11-26 19:50:14.640722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.421 [2024-11-26 19:50:14.643617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.421 [2024-11-26 19:50:14.643674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.421 [2024-11-26 19:50:14.643685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.421 [2024-11-26 19:50:14.646588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.421 [2024-11-26 19:50:14.646645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.421 [2024-11-26 19:50:14.646657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.421 [2024-11-26 19:50:14.649523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.421 [2024-11-26 19:50:14.649625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.421 [2024-11-26 19:50:14.649638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.421 [2024-11-26 19:50:14.652569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.421 [2024-11-26 19:50:14.652626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.421 [2024-11-26 19:50:14.652638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.421 [2024-11-26 19:50:14.655540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.421 [2024-11-26 19:50:14.655589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.421 [2024-11-26 19:50:14.655601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.421 [2024-11-26 19:50:14.658472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.421 [2024-11-26 19:50:14.658530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.421 [2024-11-26 19:50:14.658542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.680 [2024-11-26 19:50:14.661451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.680 [2024-11-26 19:50:14.661569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.680 [2024-11-26 19:50:14.661581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.680 [2024-11-26 19:50:14.664499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.680 [2024-11-26 19:50:14.664553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.680 [2024-11-26 19:50:14.664565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.680 [2024-11-26 19:50:14.667439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.680 [2024-11-26 19:50:14.667497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.680 [2024-11-26 19:50:14.667508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.680 [2024-11-26 19:50:14.670377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.680 [2024-11-26 19:50:14.670434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.680 [2024-11-26 19:50:14.670446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.680 [2024-11-26 19:50:14.673332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.680 [2024-11-26 19:50:14.673450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.680 [2024-11-26 19:50:14.673462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.680 [2024-11-26 19:50:14.676367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.680 [2024-11-26 19:50:14.676417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.680 [2024-11-26 19:50:14.676429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.680 [2024-11-26 19:50:14.679309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.680 [2024-11-26 19:50:14.679360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.680 [2024-11-26 19:50:14.679372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.680 [2024-11-26 19:50:14.682243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.680 [2024-11-26 19:50:14.682301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.680 [2024-11-26 19:50:14.682313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.680 [2024-11-26 19:50:14.685226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.680 [2024-11-26 19:50:14.685283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.680 [2024-11-26 19:50:14.685295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.680 [2024-11-26 19:50:14.688178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.680 [2024-11-26 19:50:14.688233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.680 [2024-11-26 19:50:14.688245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.680 [2024-11-26 19:50:14.691131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.680 [2024-11-26 19:50:14.691188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.680 [2024-11-26 19:50:14.691200] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.680 [2024-11-26 19:50:14.694071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.680 [2024-11-26 19:50:14.694128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.680 [2024-11-26 19:50:14.694140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.680 [2024-11-26 19:50:14.697025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.680 [2024-11-26 19:50:14.697083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.680 [2024-11-26 19:50:14.697095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.680 [2024-11-26 19:50:14.699967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.680 [2024-11-26 19:50:14.700024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.680 [2024-11-26 19:50:14.700036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.680 [2024-11-26 19:50:14.702922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.680 [2024-11-26 19:50:14.702990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.680 [2024-11-26 19:50:14.703002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.680 [2024-11-26 19:50:14.705894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.680 [2024-11-26 19:50:14.705950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.680 [2024-11-26 19:50:14.705962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.680 [2024-11-26 19:50:14.708866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.680 [2024-11-26 19:50:14.708924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.680 [2024-11-26 19:50:14.708937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.680 [2024-11-26 19:50:14.711798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.680 [2024-11-26 19:50:14.711857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.680 [2024-11-26 
19:50:14.711868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.680 [2024-11-26 19:50:14.714720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.680 [2024-11-26 19:50:14.714850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.681 [2024-11-26 19:50:14.714862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.681 [2024-11-26 19:50:14.717761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.681 [2024-11-26 19:50:14.717828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.681 [2024-11-26 19:50:14.717840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.681 [2024-11-26 19:50:14.720713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.681 [2024-11-26 19:50:14.720775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.681 [2024-11-26 19:50:14.720787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.681 [2024-11-26 19:50:14.723662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.681 [2024-11-26 19:50:14.723720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.681 [2024-11-26 19:50:14.723731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.681 [2024-11-26 19:50:14.726583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.681 [2024-11-26 19:50:14.726700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.681 [2024-11-26 19:50:14.726713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.681 [2024-11-26 19:50:14.729597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.681 [2024-11-26 19:50:14.729653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.681 [2024-11-26 19:50:14.729665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.681 [2024-11-26 19:50:14.732547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.681 [2024-11-26 19:50:14.732609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:19.681 [2024-11-26 19:50:14.732621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.681 [2024-11-26 19:50:14.735514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.681 [2024-11-26 19:50:14.735571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.681 [2024-11-26 19:50:14.735583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.681 [2024-11-26 19:50:14.738471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.681 [2024-11-26 19:50:14.738573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.681 [2024-11-26 19:50:14.738584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.681 [2024-11-26 19:50:14.741511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.681 [2024-11-26 19:50:14.741562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.681 [2024-11-26 19:50:14.741574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.681 [2024-11-26 19:50:14.744470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.681 [2024-11-26 19:50:14.744527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.681 [2024-11-26 19:50:14.744540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.681 [2024-11-26 19:50:14.747440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.681 [2024-11-26 19:50:14.747491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.681 [2024-11-26 19:50:14.747503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.681 [2024-11-26 19:50:14.750380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.681 [2024-11-26 19:50:14.750499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.681 [2024-11-26 19:50:14.750511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.681 [2024-11-26 19:50:14.753414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.681 [2024-11-26 19:50:14.753458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:16:19.681 [2024-11-26 19:50:14.753470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.681 [2024-11-26 19:50:14.756377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.681 [2024-11-26 19:50:14.756433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.681 [2024-11-26 19:50:14.756444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.681 [2024-11-26 19:50:14.759324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.681 [2024-11-26 19:50:14.759379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.681 [2024-11-26 19:50:14.759391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.681 [2024-11-26 19:50:14.762282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.681 [2024-11-26 19:50:14.762334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.681 [2024-11-26 19:50:14.762346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.681 [2024-11-26 19:50:14.765338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.681 [2024-11-26 19:50:14.765458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.681 [2024-11-26 19:50:14.765470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.681 [2024-11-26 19:50:14.768455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.681 [2024-11-26 19:50:14.768515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.681 [2024-11-26 19:50:14.768527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.681 [2024-11-26 19:50:14.771502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.681 [2024-11-26 19:50:14.771568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.681 [2024-11-26 19:50:14.771580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.681 [2024-11-26 19:50:14.774466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.681 [2024-11-26 19:50:14.774518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.681 [2024-11-26 19:50:14.774530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.681 [2024-11-26 19:50:14.777596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.681 [2024-11-26 19:50:14.777652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.681 [2024-11-26 19:50:14.777664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.681 [2024-11-26 19:50:14.780719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.681 [2024-11-26 19:50:14.780855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.681 [2024-11-26 19:50:14.780867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.681 [2024-11-26 19:50:14.783806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.681 [2024-11-26 19:50:14.783862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.681 [2024-11-26 19:50:14.783874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.681 [2024-11-26 19:50:14.786735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.681 [2024-11-26 19:50:14.786796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.681 [2024-11-26 19:50:14.786809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.681 [2024-11-26 19:50:14.789679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.681 [2024-11-26 19:50:14.789736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.681 [2024-11-26 19:50:14.789747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.681 [2024-11-26 19:50:14.792650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.681 [2024-11-26 19:50:14.792776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.681 [2024-11-26 19:50:14.792788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.681 [2024-11-26 19:50:14.795699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.681 [2024-11-26 19:50:14.795750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.681 [2024-11-26 19:50:14.795762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.681 [2024-11-26 19:50:14.798643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.681 [2024-11-26 19:50:14.798697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.681 [2024-11-26 19:50:14.798708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.681 [2024-11-26 19:50:14.801610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.681 [2024-11-26 19:50:14.801665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.681 [2024-11-26 19:50:14.801677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.681 [2024-11-26 19:50:14.804600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.681 [2024-11-26 19:50:14.804708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.681 [2024-11-26 19:50:14.804720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.681 [2024-11-26 19:50:14.807627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.681 [2024-11-26 19:50:14.807684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.681 [2024-11-26 19:50:14.807695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.681 [2024-11-26 19:50:14.810590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.681 [2024-11-26 19:50:14.810642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.681 [2024-11-26 19:50:14.810654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.681 [2024-11-26 19:50:14.813551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.681 [2024-11-26 19:50:14.813607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.681 [2024-11-26 19:50:14.813619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.681 [2024-11-26 19:50:14.816537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.681 [2024-11-26 19:50:14.816643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.681 [2024-11-26 19:50:14.816656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.681 [2024-11-26 19:50:14.819591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.681 [2024-11-26 19:50:14.819648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.681 [2024-11-26 19:50:14.819659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.681 [2024-11-26 19:50:14.822537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.681 [2024-11-26 19:50:14.822589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.681 [2024-11-26 19:50:14.822601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.681 [2024-11-26 19:50:14.825516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.681 [2024-11-26 19:50:14.825567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.681 [2024-11-26 19:50:14.825579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.681 [2024-11-26 19:50:14.828494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.681 [2024-11-26 19:50:14.828601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.681 [2024-11-26 19:50:14.828613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.681 [2024-11-26 19:50:14.831539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.681 [2024-11-26 19:50:14.831595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.681 [2024-11-26 19:50:14.831607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.681 [2024-11-26 19:50:14.834492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.681 [2024-11-26 19:50:14.834549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.681 [2024-11-26 19:50:14.834561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.681 [2024-11-26 19:50:14.837438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.681 [2024-11-26 19:50:14.837495] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.681 [2024-11-26 19:50:14.837506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.681 [2024-11-26 19:50:14.840418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.681 [2024-11-26 19:50:14.840544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.681 [2024-11-26 19:50:14.840556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.681 [2024-11-26 19:50:14.843487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.681 [2024-11-26 19:50:14.843541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.681 [2024-11-26 19:50:14.843553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.681 [2024-11-26 19:50:14.846438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.681 [2024-11-26 19:50:14.846488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.681 [2024-11-26 19:50:14.846500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.681 [2024-11-26 19:50:14.849358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.681 [2024-11-26 19:50:14.849415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.681 [2024-11-26 19:50:14.849427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.681 [2024-11-26 19:50:14.852360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.681 [2024-11-26 19:50:14.852484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.681 [2024-11-26 19:50:14.852496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.681 [2024-11-26 19:50:14.855368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.681 [2024-11-26 19:50:14.855418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.681 [2024-11-26 19:50:14.855431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.681 [2024-11-26 19:50:14.858307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.681 [2024-11-26 
19:50:14.858361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.681 [2024-11-26 19:50:14.858373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.681 [2024-11-26 19:50:14.861254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.682 [2024-11-26 19:50:14.861306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.682 [2024-11-26 19:50:14.861318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.682 [2024-11-26 19:50:14.864233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.682 [2024-11-26 19:50:14.864289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.682 [2024-11-26 19:50:14.864301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.682 [2024-11-26 19:50:14.867183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.682 [2024-11-26 19:50:14.867293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.682 [2024-11-26 19:50:14.867305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.682 [2024-11-26 19:50:14.870211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.682 [2024-11-26 19:50:14.870262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.682 [2024-11-26 19:50:14.870274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.682 [2024-11-26 19:50:14.873176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.682 [2024-11-26 19:50:14.873231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.682 [2024-11-26 19:50:14.873243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.682 [2024-11-26 19:50:14.876164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.682 [2024-11-26 19:50:14.876221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.682 [2024-11-26 19:50:14.876234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.682 [2024-11-26 19:50:14.879119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 
00:16:19.682 [2024-11-26 19:50:14.879176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.682 [2024-11-26 19:50:14.879188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.682 [2024-11-26 19:50:14.882055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.682 [2024-11-26 19:50:14.882184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.682 [2024-11-26 19:50:14.882196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.682 [2024-11-26 19:50:14.885107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.682 [2024-11-26 19:50:14.885159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.682 [2024-11-26 19:50:14.885171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.682 [2024-11-26 19:50:14.888123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.682 [2024-11-26 19:50:14.888181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.682 [2024-11-26 19:50:14.888194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.682 [2024-11-26 19:50:14.891162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.682 [2024-11-26 19:50:14.891220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.682 [2024-11-26 19:50:14.891232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.682 [2024-11-26 19:50:14.894180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.682 [2024-11-26 19:50:14.894236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.682 [2024-11-26 19:50:14.894248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.682 [2024-11-26 19:50:14.897150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.682 [2024-11-26 19:50:14.897266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.682 [2024-11-26 19:50:14.897278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.682 [2024-11-26 19:50:14.900226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with 
pdu=0x200016eff3c8 00:16:19.682 [2024-11-26 19:50:14.900284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.682 [2024-11-26 19:50:14.900296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.682 [2024-11-26 19:50:14.903273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.682 [2024-11-26 19:50:14.903329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.682 [2024-11-26 19:50:14.903342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.682 [2024-11-26 19:50:14.906250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.682 [2024-11-26 19:50:14.906307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.682 [2024-11-26 19:50:14.906319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.682 [2024-11-26 19:50:14.909241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.682 [2024-11-26 19:50:14.909292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.682 [2024-11-26 19:50:14.909304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.682 [2024-11-26 19:50:14.912181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.682 [2024-11-26 19:50:14.912306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.682 [2024-11-26 19:50:14.912318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.682 [2024-11-26 19:50:14.915344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.682 [2024-11-26 19:50:14.915399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.682 [2024-11-26 19:50:14.915412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.682 [2024-11-26 19:50:14.918352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.682 [2024-11-26 19:50:14.918410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.682 [2024-11-26 19:50:14.918422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.682 [2024-11-26 19:50:14.921379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.682 [2024-11-26 19:50:14.921420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.682 [2024-11-26 19:50:14.921432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.939 [2024-11-26 19:50:14.924469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.939 [2024-11-26 19:50:14.924526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.939 [2024-11-26 19:50:14.924538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.939 [2024-11-26 19:50:14.927504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.939 [2024-11-26 19:50:14.927618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.939 [2024-11-26 19:50:14.927630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.939 [2024-11-26 19:50:14.930598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.939 [2024-11-26 19:50:14.930659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.939 [2024-11-26 19:50:14.930671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.939 [2024-11-26 19:50:14.933630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.939 [2024-11-26 19:50:14.933681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.939 [2024-11-26 19:50:14.933693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.939 [2024-11-26 19:50:14.936675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.939 [2024-11-26 19:50:14.936728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.939 [2024-11-26 19:50:14.936740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.939 [2024-11-26 19:50:14.939694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.939 [2024-11-26 19:50:14.939818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.939 [2024-11-26 19:50:14.939830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.939 [2024-11-26 19:50:14.942820] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.939 [2024-11-26 19:50:14.942879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.939 [2024-11-26 19:50:14.942891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.939 [2024-11-26 19:50:14.945850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.939 [2024-11-26 19:50:14.945907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.939 [2024-11-26 19:50:14.945920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.939 [2024-11-26 19:50:14.948876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.939 [2024-11-26 19:50:14.948943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.939 [2024-11-26 19:50:14.948955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.939 [2024-11-26 19:50:14.951840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.939 [2024-11-26 19:50:14.951898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.939 [2024-11-26 19:50:14.951910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.939 [2024-11-26 19:50:14.954800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.939 [2024-11-26 19:50:14.954856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.939 [2024-11-26 19:50:14.954868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.939 [2024-11-26 19:50:14.957746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.939 [2024-11-26 19:50:14.957805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.939 [2024-11-26 19:50:14.957818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.939 [2024-11-26 19:50:14.960718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.939 [2024-11-26 19:50:14.960782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.939 [2024-11-26 19:50:14.960794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.939 [2024-11-26 19:50:14.963670] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.939 [2024-11-26 19:50:14.963727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.939 [2024-11-26 19:50:14.963739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.939 [2024-11-26 19:50:14.966635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.939 [2024-11-26 19:50:14.966760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.939 [2024-11-26 19:50:14.966782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.939 [2024-11-26 19:50:14.969714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.939 [2024-11-26 19:50:14.969783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.939 [2024-11-26 19:50:14.969794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.939 [2024-11-26 19:50:14.972686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.939 [2024-11-26 19:50:14.972739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.939 [2024-11-26 19:50:14.972751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.939 [2024-11-26 19:50:14.975653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.939 [2024-11-26 19:50:14.975706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.939 [2024-11-26 19:50:14.975717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.939 [2024-11-26 19:50:14.978600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.939 [2024-11-26 19:50:14.978705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.939 [2024-11-26 19:50:14.978718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.939 [2024-11-26 19:50:14.981630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.939 [2024-11-26 19:50:14.981686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.939 [2024-11-26 19:50:14.981698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.939 [2024-11-26 
19:50:14.984594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.939 [2024-11-26 19:50:14.984643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.939 [2024-11-26 19:50:14.984656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.939 [2024-11-26 19:50:14.987525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.940 [2024-11-26 19:50:14.987579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.940 [2024-11-26 19:50:14.987591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.940 [2024-11-26 19:50:14.990461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.940 [2024-11-26 19:50:14.990575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.940 [2024-11-26 19:50:14.990587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.940 [2024-11-26 19:50:14.993475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.940 [2024-11-26 19:50:14.993530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.940 [2024-11-26 19:50:14.993542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.940 [2024-11-26 19:50:14.996454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.940 [2024-11-26 19:50:14.996509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.940 [2024-11-26 19:50:14.996521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.940 [2024-11-26 19:50:14.999364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.940 [2024-11-26 19:50:14.999419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.940 [2024-11-26 19:50:14.999431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.940 [2024-11-26 19:50:15.002313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.940 [2024-11-26 19:50:15.002368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.940 [2024-11-26 19:50:15.002380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
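Each repeated group above corresponds to one 128 KiB write (len:32 blocks) in the nvmf_digest_error run whose TCP data digest (CRC32C) failed verification: data_crc32_calc_done reports the digest error on the PDU and the command is then completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22). These failures are expected for this test case; after bdevperf finishes, digest.sh reads the accumulated error count back out of the bdev iostat and asserts it is non-zero, as the trace below shows. A minimal sketch of that check (variable name illustrative; the rpc.py path, bperf socket and jq fields match the trace, with the jq filter written in equivalent dotted form):

    errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
                   bdev_get_iostat -b nvme0n1 |
               jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( errcount > 0 ))   # the digest_error case passes only if such errors were actually observed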
00:16:19.940 [2024-11-26 19:50:15.005337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.940 [2024-11-26 19:50:15.005451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.940 [2024-11-26 19:50:15.005463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.940 [2024-11-26 19:50:15.008444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.940 [2024-11-26 19:50:15.008500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.940 [2024-11-26 19:50:15.008512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:19.940 [2024-11-26 19:50:15.011442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.940 [2024-11-26 19:50:15.011499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.940 [2024-11-26 19:50:15.011511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:19.940 [2024-11-26 19:50:15.014472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.940 [2024-11-26 19:50:15.014519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.940 [2024-11-26 19:50:15.014531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:19.940 [2024-11-26 19:50:15.017509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf615b0) with pdu=0x200016eff3c8 00:16:19.940 10400.50 IOPS, 1300.06 MiB/s [2024-11-26T19:50:15.187Z] [2024-11-26 19:50:15.018911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.940 [2024-11-26 19:50:15.018936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:19.940 00:16:19.940 Latency(us) 00:16:19.940 [2024-11-26T19:50:15.187Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:19.940 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:16:19.940 nvme0n1 : 2.00 10396.16 1299.52 0.00 0.00 1535.92 1039.75 11342.77 00:16:19.940 [2024-11-26T19:50:15.187Z] =================================================================================================================== 00:16:19.940 [2024-11-26T19:50:15.187Z] Total : 10396.16 1299.52 0.00 0.00 1535.92 1039.75 11342.77 00:16:19.940 { 00:16:19.940 "results": [ 00:16:19.940 { 00:16:19.940 "job": "nvme0n1", 00:16:19.940 "core_mask": "0x2", 00:16:19.940 "workload": "randwrite", 00:16:19.940 "status": "finished", 00:16:19.940 "queue_depth": 16, 00:16:19.940 "io_size": 131072, 00:16:19.940 "runtime": 2.002759, 00:16:19.940 "iops": 10396.158499350146, 00:16:19.940 "mibps": 
1299.5198124187682, 00:16:19.940 "io_failed": 0, 00:16:19.940 "io_timeout": 0, 00:16:19.940 "avg_latency_us": 1535.9227478174773, 00:16:19.940 "min_latency_us": 1039.753846153846, 00:16:19.940 "max_latency_us": 11342.76923076923 00:16:19.940 } 00:16:19.940 ], 00:16:19.940 "core_count": 1 00:16:19.940 } 00:16:19.940 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:16:19.940 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:16:19.940 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:16:19.940 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:16:19.940 | .driver_specific 00:16:19.940 | .nvme_error 00:16:19.940 | .status_code 00:16:19.940 | .command_transient_transport_error' 00:16:20.198 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 672 > 0 )) 00:16:20.198 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 78986 00:16:20.198 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 78986 ']' 00:16:20.198 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 78986 00:16:20.198 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:16:20.198 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:20.198 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78986 00:16:20.198 killing process with pid 78986 00:16:20.198 Received shutdown signal, test time was about 2.000000 seconds 00:16:20.198 00:16:20.198 Latency(us) 00:16:20.198 [2024-11-26T19:50:15.445Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:20.198 [2024-11-26T19:50:15.445Z] =================================================================================================================== 00:16:20.198 [2024-11-26T19:50:15.445Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:20.198 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:20.198 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:20.198 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78986' 00:16:20.198 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 78986 00:16:20.198 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 78986 00:16:20.198 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 78790 00:16:20.198 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 78790 ']' 00:16:20.198 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 78790 00:16:20.198 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:16:20.198 19:50:15 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:20.198 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78790 00:16:20.198 killing process with pid 78790 00:16:20.198 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:20.198 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:20.198 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78790' 00:16:20.198 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 78790 00:16:20.198 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 78790 00:16:20.455 00:16:20.455 real 0m16.388s 00:16:20.455 user 0m31.385s 00:16:20.455 sys 0m3.870s 00:16:20.455 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:20.455 ************************************ 00:16:20.455 END TEST nvmf_digest_error 00:16:20.455 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:20.455 ************************************ 00:16:20.455 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:16:20.455 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:16:20.455 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:20.455 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:16:20.455 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:20.455 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:16:20.455 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:20.455 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:20.455 rmmod nvme_tcp 00:16:20.455 rmmod nvme_fabrics 00:16:20.455 rmmod nvme_keyring 00:16:20.455 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:20.455 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:16:20.455 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:16:20.455 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 78790 ']' 00:16:20.455 Process with pid 78790 is not found 00:16:20.455 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 78790 00:16:20.455 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 78790 ']' 00:16:20.455 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 78790 00:16:20.455 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (78790) - No such process 00:16:20.455 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 78790 is not found' 00:16:20.455 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:20.455 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:20.455 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # 
nvmf_tcp_fini 00:16:20.455 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:16:20.455 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:16:20.455 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:20.455 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:16:20.455 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:20.455 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:20.455 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:20.455 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:20.455 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:20.455 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:20.746 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:20.746 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:20.746 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:20.746 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:20.746 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:20.746 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:20.746 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:20.746 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:20.746 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:20.746 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:20.746 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:20.746 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:20.746 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:20.746 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:16:20.746 ************************************ 00:16:20.746 END TEST nvmf_digest 00:16:20.746 ************************************ 00:16:20.746 00:16:20.746 real 0m33.906s 00:16:20.746 user 1m4.005s 00:16:20.746 sys 0m7.763s 00:16:20.746 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:20.746 19:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:16:20.746 19:50:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:16:20.746 19:50:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:16:20.746 19:50:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:16:20.746 19:50:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:20.746 19:50:15 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:20.746 19:50:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.746 ************************************ 00:16:20.746 START TEST nvmf_host_multipath 00:16:20.746 ************************************ 00:16:20.746 19:50:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:16:20.746 * Looking for test storage... 00:16:20.746 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:20.746 19:50:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:20.746 19:50:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:16:20.746 19:50:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:21.004 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:21.004 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:21.004 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:21.004 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:21.004 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:16:21.004 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:21.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:21.005 --rc genhtml_branch_coverage=1 00:16:21.005 --rc genhtml_function_coverage=1 00:16:21.005 --rc genhtml_legend=1 00:16:21.005 --rc geninfo_all_blocks=1 00:16:21.005 --rc geninfo_unexecuted_blocks=1 00:16:21.005 00:16:21.005 ' 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:21.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:21.005 --rc genhtml_branch_coverage=1 00:16:21.005 --rc genhtml_function_coverage=1 00:16:21.005 --rc genhtml_legend=1 00:16:21.005 --rc geninfo_all_blocks=1 00:16:21.005 --rc geninfo_unexecuted_blocks=1 00:16:21.005 00:16:21.005 ' 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:21.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:21.005 --rc genhtml_branch_coverage=1 00:16:21.005 --rc genhtml_function_coverage=1 00:16:21.005 --rc genhtml_legend=1 00:16:21.005 --rc geninfo_all_blocks=1 00:16:21.005 --rc geninfo_unexecuted_blocks=1 00:16:21.005 00:16:21.005 ' 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:21.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:21.005 --rc genhtml_branch_coverage=1 00:16:21.005 --rc genhtml_function_coverage=1 00:16:21.005 --rc genhtml_legend=1 00:16:21.005 --rc geninfo_all_blocks=1 00:16:21.005 --rc geninfo_unexecuted_blocks=1 00:16:21.005 00:16:21.005 ' 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=91838eb1-5852-43eb-90b2-09876f360ab2 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:21.005 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:21.005 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:21.006 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:21.006 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:21.006 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:21.006 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:21.006 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:21.006 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:21.006 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:21.006 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:21.006 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:21.006 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:21.006 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:21.006 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:21.006 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:21.006 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:21.006 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:21.006 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:21.006 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:21.006 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:21.006 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:21.006 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:21.006 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:21.006 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:21.006 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:21.006 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:21.006 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:21.006 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:21.006 Cannot find device "nvmf_init_br" 00:16:21.006 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:16:21.006 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:21.006 Cannot find device "nvmf_init_br2" 00:16:21.006 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:16:21.006 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:21.006 Cannot find device "nvmf_tgt_br" 00:16:21.006 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:16:21.006 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:21.006 Cannot find device "nvmf_tgt_br2" 00:16:21.006 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:16:21.006 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:21.006 Cannot find device "nvmf_init_br" 00:16:21.006 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:16:21.006 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:21.006 Cannot find device "nvmf_init_br2" 00:16:21.006 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:16:21.006 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:21.006 Cannot find device "nvmf_tgt_br" 00:16:21.006 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:16:21.006 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:21.006 Cannot find device "nvmf_tgt_br2" 00:16:21.006 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:16:21.006 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:21.006 Cannot find device "nvmf_br" 00:16:21.006 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:16:21.006 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:21.006 Cannot find device "nvmf_init_if" 00:16:21.006 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:16:21.006 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:21.006 Cannot find device "nvmf_init_if2" 00:16:21.006 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:16:21.006 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:16:21.006 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:21.006 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:16:21.006 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:21.006 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:21.006 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:16:21.006 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:21.006 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:21.006 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:21.006 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:21.006 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:21.263 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:21.263 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:21.264 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:21.264 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:21.264 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:21.264 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:21.264 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:21.264 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:21.264 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:21.264 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:21.264 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:21.264 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:21.264 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:21.264 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:21.264 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:21.264 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:21.264 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:21.264 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
00:16:21.264 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:21.264 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:21.264 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:21.264 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:21.264 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:21.264 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:21.264 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:21.264 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:21.264 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:21.264 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:21.264 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:21.264 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:16:21.264 00:16:21.264 --- 10.0.0.3 ping statistics --- 00:16:21.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:21.264 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:16:21.264 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:21.264 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:21.264 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.026 ms 00:16:21.264 00:16:21.264 --- 10.0.0.4 ping statistics --- 00:16:21.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:21.264 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:16:21.264 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:21.264 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:21.264 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:16:21.264 00:16:21.264 --- 10.0.0.1 ping statistics --- 00:16:21.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:21.264 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:16:21.264 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:21.264 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:21.264 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:16:21.264 00:16:21.264 --- 10.0.0.2 ping statistics --- 00:16:21.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:21.264 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:16:21.264 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:21.264 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 00:16:21.264 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:21.264 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:21.264 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:21.264 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:21.264 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:21.264 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:21.264 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:21.264 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:16:21.264 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:21.264 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:21.264 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:21.264 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=79304 00:16:21.264 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:16:21.264 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 79304 00:16:21.264 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 79304 ']' 00:16:21.264 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:21.264 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:21.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:21.264 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:21.264 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:21.264 19:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:21.264 [2024-11-26 19:50:16.444939] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:16:21.264 [2024-11-26 19:50:16.444990] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:21.521 [2024-11-26 19:50:16.576029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:21.521 [2024-11-26 19:50:16.606144] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:21.521 [2024-11-26 19:50:16.606303] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:21.521 [2024-11-26 19:50:16.606357] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:21.521 [2024-11-26 19:50:16.606378] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:21.521 [2024-11-26 19:50:16.606390] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:21.521 [2024-11-26 19:50:16.607011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:21.521 [2024-11-26 19:50:16.607016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:21.521 [2024-11-26 19:50:16.636300] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:22.085 19:50:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:22.085 19:50:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:16:22.085 19:50:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:22.085 19:50:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:22.085 19:50:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:22.085 19:50:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:22.085 19:50:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=79304 00:16:22.085 19:50:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:22.342 [2024-11-26 19:50:17.503853] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:22.342 19:50:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:16:22.600 Malloc0 00:16:22.600 19:50:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:16:22.857 19:50:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:22.857 19:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:23.115 [2024-11-26 19:50:18.250038] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:23.115 19:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:16:23.374 [2024-11-26 19:50:18.414109] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:16:23.374 19:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:16:23.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:23.374 19:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=79349 00:16:23.374 19:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:23.374 19:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 79349 /var/tmp/bdevperf.sock 00:16:23.374 19:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 79349 ']' 00:16:23.374 19:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:23.374 19:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:23.374 19:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:23.374 19:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:23.374 19:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:24.324 19:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:24.324 19:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:16:24.324 19:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:16:24.325 19:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:16:24.604 Nvme0n1 00:16:24.604 19:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:16:24.862 Nvme0n1 00:16:24.862 19:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:16:24.862 19:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:16:26.235 19:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:16:26.235 19:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:26.235 19:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:26.492 19:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:16:26.492 19:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 79304 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:16:26.492 19:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=79394 00:16:26.492 19:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:16:33.079 19:50:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:16:33.079 19:50:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:16:33.079 19:50:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:16:33.079 19:50:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:33.079 Attaching 4 probes... 00:16:33.079 @path[10.0.0.3, 4421]: 26096 00:16:33.079 @path[10.0.0.3, 4421]: 26699 00:16:33.079 @path[10.0.0.3, 4421]: 26566 00:16:33.079 @path[10.0.0.3, 4421]: 26508 00:16:33.079 @path[10.0.0.3, 4421]: 26674 00:16:33.079 19:50:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:16:33.079 19:50:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:16:33.079 19:50:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:16:33.079 19:50:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:16:33.079 19:50:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:16:33.079 19:50:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:16:33.079 19:50:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 79394 00:16:33.079 19:50:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:33.079 19:50:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:16:33.079 19:50:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:33.080 19:50:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:16:33.080 19:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:16:33.080 19:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=79517 00:16:33.080 19:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:16:33.080 19:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 79304 
/home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:16:39.634 19:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:16:39.634 19:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:16:39.634 19:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:16:39.634 19:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:39.634 Attaching 4 probes... 00:16:39.634 @path[10.0.0.3, 4420]: 25007 00:16:39.634 @path[10.0.0.3, 4420]: 25644 00:16:39.634 @path[10.0.0.3, 4420]: 25458 00:16:39.634 @path[10.0.0.3, 4420]: 24888 00:16:39.634 @path[10.0.0.3, 4420]: 24400 00:16:39.634 19:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:16:39.635 19:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:16:39.635 19:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:16:39.635 19:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:16:39.635 19:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:16:39.635 19:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:16:39.635 19:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 79517 00:16:39.635 19:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:39.635 19:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:16:39.635 19:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:16:39.635 19:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:39.892 19:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:16:39.892 19:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 79304 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:16:39.892 19:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=79630 00:16:39.892 19:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:16:46.455 19:50:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:16:46.455 19:50:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:16:46.455 19:50:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:16:46.455 19:50:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:46.455 Attaching 4 probes... 00:16:46.455 @path[10.0.0.3, 4421]: 17870 00:16:46.455 @path[10.0.0.3, 4421]: 25200 00:16:46.455 @path[10.0.0.3, 4421]: 25163 00:16:46.455 @path[10.0.0.3, 4421]: 25148 00:16:46.455 @path[10.0.0.3, 4421]: 25122 00:16:46.455 19:50:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:16:46.455 19:50:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:16:46.455 19:50:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:16:46.455 19:50:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:16:46.455 19:50:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:16:46.455 19:50:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:16:46.455 19:50:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 79630 00:16:46.455 19:50:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:46.455 19:50:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:16:46.455 19:50:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:16:46.455 19:50:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:16:46.455 19:50:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:16:46.455 19:50:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 79304 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:16:46.455 19:50:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=79750 00:16:46.455 19:50:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:16:53.170 19:50:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:16:53.170 19:50:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:16:53.170 19:50:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:16:53.170 19:50:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:53.170 Attaching 4 probes... 
00:16:53.170 00:16:53.170 00:16:53.170 00:16:53.170 00:16:53.170 00:16:53.170 19:50:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:16:53.170 19:50:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:16:53.170 19:50:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:16:53.170 19:50:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:16:53.170 19:50:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:16:53.170 19:50:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:16:53.170 19:50:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 79750 00:16:53.170 19:50:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:53.170 19:50:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:16:53.170 19:50:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:53.170 19:50:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:53.170 19:50:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:16:53.170 19:50:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=79869 00:16:53.170 19:50:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:16:53.170 19:50:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 79304 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:16:59.788 19:50:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:16:59.788 19:50:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:16:59.788 19:50:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:16:59.788 19:50:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:59.788 Attaching 4 probes... 
00:16:59.788 @path[10.0.0.3, 4421]: 24573 00:16:59.788 @path[10.0.0.3, 4421]: 24830 00:16:59.788 @path[10.0.0.3, 4421]: 24912 00:16:59.788 @path[10.0.0.3, 4421]: 24976 00:16:59.788 @path[10.0.0.3, 4421]: 22293 00:16:59.788 19:50:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:16:59.788 19:50:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:16:59.788 19:50:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:16:59.788 19:50:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:16:59.788 19:50:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:16:59.788 19:50:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:16:59.788 19:50:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 79869 00:16:59.788 19:50:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:59.788 19:50:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:16:59.788 19:50:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:17:00.722 19:50:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:17:00.722 19:50:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 79304 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:00.722 19:50:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=79993 00:17:00.722 19:50:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:17:07.278 19:51:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:07.278 19:51:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:17:07.278 19:51:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:17:07.278 19:51:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:07.278 Attaching 4 probes... 
00:17:07.278 @path[10.0.0.3, 4420]: 24661 00:17:07.278 @path[10.0.0.3, 4420]: 24962 00:17:07.278 @path[10.0.0.3, 4420]: 24987 00:17:07.278 @path[10.0.0.3, 4420]: 24871 00:17:07.278 @path[10.0.0.3, 4420]: 24877 00:17:07.278 19:51:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:17:07.278 19:51:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:07.278 19:51:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:17:07.278 19:51:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:17:07.278 19:51:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:17:07.278 19:51:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:17:07.278 19:51:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 79993 00:17:07.278 19:51:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:07.278 19:51:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:17:07.278 [2024-11-26 19:51:02.194728] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:17:07.278 19:51:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:17:07.278 19:51:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:17:13.891 19:51:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:17:13.891 19:51:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80171 00:17:13.891 19:51:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 79304 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:13.891 19:51:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:17:20.455 19:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:20.455 19:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:20.455 19:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:17:20.455 19:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:20.455 Attaching 4 probes... 
00:17:20.455 @path[10.0.0.3, 4421]: 25525 00:17:20.455 @path[10.0.0.3, 4421]: 26185 00:17:20.455 @path[10.0.0.3, 4421]: 26062 00:17:20.455 @path[10.0.0.3, 4421]: 26173 00:17:20.455 @path[10.0.0.3, 4421]: 26105 00:17:20.455 19:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:17:20.455 19:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:20.455 19:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:17:20.455 19:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:17:20.455 19:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:20.455 19:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:20.455 19:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80171 00:17:20.455 19:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:20.455 19:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 79349 00:17:20.455 19:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 79349 ']' 00:17:20.455 19:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 79349 00:17:20.455 19:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:17:20.455 19:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:20.455 19:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79349 00:17:20.455 killing process with pid 79349 00:17:20.455 19:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:20.455 19:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:20.455 19:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79349' 00:17:20.455 19:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 79349 00:17:20.455 19:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 79349 00:17:20.455 { 00:17:20.455 "results": [ 00:17:20.455 { 00:17:20.455 "job": "Nvme0n1", 00:17:20.455 "core_mask": "0x4", 00:17:20.455 "workload": "verify", 00:17:20.455 "status": "terminated", 00:17:20.455 "verify_range": { 00:17:20.455 "start": 0, 00:17:20.455 "length": 16384 00:17:20.455 }, 00:17:20.455 "queue_depth": 128, 00:17:20.455 "io_size": 4096, 00:17:20.455 "runtime": 54.49802, 00:17:20.455 "iops": 10721.710623615316, 00:17:20.455 "mibps": 41.88168212349733, 00:17:20.455 "io_failed": 0, 00:17:20.455 "io_timeout": 0, 00:17:20.455 "avg_latency_us": 11915.76116397246, 00:17:20.455 "min_latency_us": 368.64, 00:17:20.455 "max_latency_us": 7020619.618461538 00:17:20.455 } 00:17:20.455 ], 00:17:20.455 "core_count": 1 00:17:20.455 } 00:17:20.455 19:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 79349 00:17:20.455 19:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:20.455 [2024-11-26 19:50:18.462026] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 
24.03.0 initialization... 00:17:20.455 [2024-11-26 19:50:18.462092] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79349 ] 00:17:20.455 [2024-11-26 19:50:18.595900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.455 [2024-11-26 19:50:18.626998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:20.455 [2024-11-26 19:50:18.655478] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:20.455 Running I/O for 90 seconds... 00:17:20.455 9622.00 IOPS, 37.59 MiB/s [2024-11-26T19:51:15.702Z] 11304.50 IOPS, 44.16 MiB/s [2024-11-26T19:51:15.702Z] 11973.67 IOPS, 46.77 MiB/s [2024-11-26T19:51:15.702Z] 12318.75 IOPS, 48.12 MiB/s [2024-11-26T19:51:15.702Z] 12512.00 IOPS, 48.88 MiB/s [2024-11-26T19:51:15.702Z] 12636.50 IOPS, 49.36 MiB/s [2024-11-26T19:51:15.702Z] 12735.86 IOPS, 49.75 MiB/s [2024-11-26T19:51:15.702Z] [2024-11-26 19:50:28.174032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:32536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.455 [2024-11-26 19:50:28.174079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:20.455 [2024-11-26 19:50:28.174110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:32544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.455 [2024-11-26 19:50:28.174119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:20.455 [2024-11-26 19:50:28.174131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:32552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.455 [2024-11-26 19:50:28.174139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:20.455 [2024-11-26 19:50:28.174151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:32560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.455 [2024-11-26 19:50:28.174159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:20.455 [2024-11-26 19:50:28.174171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:32568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.455 [2024-11-26 19:50:28.174177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:20.455 [2024-11-26 19:50:28.174190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:32576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.455 [2024-11-26 19:50:28.174197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:20.455 [2024-11-26 19:50:28.174209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.456 [2024-11-26 19:50:28.174216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:20.456 [2024-11-26 19:50:28.174228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:32592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.456 [2024-11-26 19:50:28.174235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:20.456 [2024-11-26 19:50:28.174248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:31960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.456 [2024-11-26 19:50:28.174254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:20.456 [2024-11-26 19:50:28.174267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:31968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.456 [2024-11-26 19:50:28.174293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:20.456 [2024-11-26 19:50:28.174306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:31976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.456 [2024-11-26 19:50:28.174313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:20.456 [2024-11-26 19:50:28.174325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:31984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.456 [2024-11-26 19:50:28.174332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:20.456 [2024-11-26 19:50:28.174343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:31992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.456 [2024-11-26 19:50:28.174350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:20.456 [2024-11-26 19:50:28.174362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:32000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.456 [2024-11-26 19:50:28.174368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:20.456 [2024-11-26 19:50:28.174381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:32008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.456 [2024-11-26 19:50:28.174387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:20.456 [2024-11-26 19:50:28.174400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:32016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.456 [2024-11-26 19:50:28.174406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:20.456 [2024-11-26 19:50:28.174420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:32024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.456 [2024-11-26 19:50:28.174427] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:20.456 [2024-11-26 19:50:28.174439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:32032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.456 [2024-11-26 19:50:28.174446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:20.456 [2024-11-26 19:50:28.174459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:32040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.456 [2024-11-26 19:50:28.174466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:20.456 [2024-11-26 19:50:28.174478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:32048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.456 [2024-11-26 19:50:28.174485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:20.456 [2024-11-26 19:50:28.174497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:32056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.456 [2024-11-26 19:50:28.174504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:20.456 [2024-11-26 19:50:28.174516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:32064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.456 [2024-11-26 19:50:28.174523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:20.456 [2024-11-26 19:50:28.174539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:32072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.456 [2024-11-26 19:50:28.174546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:20.456 [2024-11-26 19:50:28.174558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:32080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.456 [2024-11-26 19:50:28.174565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:20.456 [2024-11-26 19:50:28.174579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:32600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.456 [2024-11-26 19:50:28.174588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:20.456 [2024-11-26 19:50:28.174600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.456 [2024-11-26 19:50:28.174607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:20.456 [2024-11-26 19:50:28.174619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:32616 len:8 SGL DATA BLOCK OFFSET 0x0 
00:17:20.456-00:17:20.459 [2024-11-26 19:50:28.174625 .. 19:50:28.177751] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated command/completion pairs on sqid:1/qid:1: WRITE lba:32624-32976 len:8 (SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ lba:32088-32528 len:8 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), every completion reported as ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0
00:17:20.459 12801.50 IOPS, 50.01 MiB/s [2024-11-26T19:51:15.706Z] 12768.11 IOPS, 49.88 MiB/s [2024-11-26T19:51:15.706Z] 12772.40 IOPS, 49.89 MiB/s [2024-11-26T19:51:15.706Z] 12773.91 IOPS, 49.90 MiB/s [2024-11-26T19:51:15.706Z] 12757.00 IOPS, 49.83 MiB/s [2024-11-26T19:51:15.706Z] 12720.31 IOPS, 49.69 MiB/s [2024-11-26T19:51:15.706Z] 12682.00 IOPS, 49.54 MiB/s [2024-11-26T19:51:15.706Z]
00:17:20.459-00:17:20.462 [2024-11-26 19:50:34.682341 ..] nvme_qpair.c: the same NOTICE command/completion pattern repeats on qid:1 for WRITE lba:27168-27544 len:8 (SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ lba:26592-27032 len:8 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completed ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0 (sequence continues)
dnr:0 00:17:20.462 [2024-11-26 19:50:34.685175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:27040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.462 [2024-11-26 19:50:34.685182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:20.462 [2024-11-26 19:50:34.685199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:27048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.462 [2024-11-26 19:50:34.685206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:20.462 [2024-11-26 19:50:34.685223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:27056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.462 [2024-11-26 19:50:34.685230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:20.462 [2024-11-26 19:50:34.685247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:27064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.462 [2024-11-26 19:50:34.685254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:20.462 [2024-11-26 19:50:34.685271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:27072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.462 [2024-11-26 19:50:34.685278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:20.462 [2024-11-26 19:50:34.685296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:27080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.462 [2024-11-26 19:50:34.685303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:20.462 [2024-11-26 19:50:34.685320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:27088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.462 [2024-11-26 19:50:34.685327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:20.462 [2024-11-26 19:50:34.685344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:27096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.462 [2024-11-26 19:50:34.685351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:20.462 [2024-11-26 19:50:34.685369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:27104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.462 [2024-11-26 19:50:34.685376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:20.462 [2024-11-26 19:50:34.685393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:27112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.462 [2024-11-26 19:50:34.685403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:20.462 [2024-11-26 19:50:34.685420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:27120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.462 [2024-11-26 19:50:34.685428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:20.462 [2024-11-26 19:50:34.685445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:27128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.462 [2024-11-26 19:50:34.685452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:20.462 [2024-11-26 19:50:34.685470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:27136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.462 [2024-11-26 19:50:34.685477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:20.462 [2024-11-26 19:50:34.685495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:27144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.462 [2024-11-26 19:50:34.685502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:20.462 [2024-11-26 19:50:34.685519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:27152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.462 [2024-11-26 19:50:34.685526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:20.462 [2024-11-26 19:50:34.685543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:27160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.462 [2024-11-26 19:50:34.685550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:20.462 [2024-11-26 19:50:34.685575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.462 [2024-11-26 19:50:34.685583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:20.462 [2024-11-26 19:50:34.685600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:27560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.462 [2024-11-26 19:50:34.685607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:20.462 [2024-11-26 19:50:34.685624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:27568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.462 [2024-11-26 19:50:34.685631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:20.462 [2024-11-26 19:50:34.685648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:27576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.463 [2024-11-26 19:50:34.685655] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:20.463 [2024-11-26 19:50:34.685672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:27584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.463 [2024-11-26 19:50:34.685679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:20.463 [2024-11-26 19:50:34.685696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:27592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.463 [2024-11-26 19:50:34.685707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:20.463 [2024-11-26 19:50:34.685724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:27600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.463 [2024-11-26 19:50:34.685731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:20.463 [2024-11-26 19:50:34.685748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.463 [2024-11-26 19:50:34.685755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:20.463 12236.53 IOPS, 47.80 MiB/s [2024-11-26T19:51:15.710Z] 11868.69 IOPS, 46.36 MiB/s [2024-11-26T19:51:15.710Z] 11912.65 IOPS, 46.53 MiB/s [2024-11-26T19:51:15.710Z] 11950.83 IOPS, 46.68 MiB/s [2024-11-26T19:51:15.710Z] 11982.89 IOPS, 46.81 MiB/s [2024-11-26T19:51:15.710Z] 12011.35 IOPS, 46.92 MiB/s [2024-11-26T19:51:15.710Z] 12036.71 IOPS, 47.02 MiB/s [2024-11-26T19:51:15.710Z] [2024-11-26 19:50:41.602959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:98200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.463 [2024-11-26 19:50:41.603015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:20.463 [2024-11-26 19:50:41.603047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:98208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.463 [2024-11-26 19:50:41.603055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:20.463 [2024-11-26 19:50:41.603068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:98216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.463 [2024-11-26 19:50:41.603075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:20.463 [2024-11-26 19:50:41.603101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:98224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.463 [2024-11-26 19:50:41.603107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:20.463 [2024-11-26 19:50:41.603120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:98232 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:17:20.463 [2024-11-26 19:50:41.603126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:20.463 [2024-11-26 19:50:41.603138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:98240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.463 [2024-11-26 19:50:41.603145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.463 [2024-11-26 19:50:41.603157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:98248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.463 [2024-11-26 19:50:41.603164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:20.463 [2024-11-26 19:50:41.603176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:98256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.463 [2024-11-26 19:50:41.603183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:20.463 [2024-11-26 19:50:41.603196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:98264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.463 [2024-11-26 19:50:41.603202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:20.463 [2024-11-26 19:50:41.603230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:98272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.463 [2024-11-26 19:50:41.603237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:20.463 [2024-11-26 19:50:41.603250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:98280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.463 [2024-11-26 19:50:41.603257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:20.463 [2024-11-26 19:50:41.603269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:98288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.463 [2024-11-26 19:50:41.603276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:20.463 [2024-11-26 19:50:41.603288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:98296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.463 [2024-11-26 19:50:41.603295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:20.463 [2024-11-26 19:50:41.603307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:98304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.463 [2024-11-26 19:50:41.603314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:20.463 [2024-11-26 19:50:41.603327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:108 nsid:1 lba:98312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.463 [2024-11-26 19:50:41.603334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:20.463 [2024-11-26 19:50:41.603346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:98320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.463 [2024-11-26 19:50:41.603353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:20.463 [2024-11-26 19:50:41.603365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:97816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.463 [2024-11-26 19:50:41.603373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:20.463 [2024-11-26 19:50:41.603387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:97824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.463 [2024-11-26 19:50:41.603394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:20.463 [2024-11-26 19:50:41.603407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:97832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.463 [2024-11-26 19:50:41.603414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:20.463 [2024-11-26 19:50:41.603427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:97840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.463 [2024-11-26 19:50:41.603434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:20.463 [2024-11-26 19:50:41.603447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:97848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.463 [2024-11-26 19:50:41.603454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:20.463 [2024-11-26 19:50:41.603471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:97856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.463 [2024-11-26 19:50:41.603480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:20.463 [2024-11-26 19:50:41.603492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:97864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.463 [2024-11-26 19:50:41.603499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:20.463 [2024-11-26 19:50:41.603512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:97872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.463 [2024-11-26 19:50:41.603519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:20.463 [2024-11-26 
19:50:41.603534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:98328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.463 [2024-11-26 19:50:41.603542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:20.463 [2024-11-26 19:50:41.603554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:98336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.463 [2024-11-26 19:50:41.603561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:20.463 [2024-11-26 19:50:41.603574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:98344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.463 [2024-11-26 19:50:41.603580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:20.463 [2024-11-26 19:50:41.603592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:98352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.463 [2024-11-26 19:50:41.603599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:20.463 [2024-11-26 19:50:41.603611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:98360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.463 [2024-11-26 19:50:41.603618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:20.463 [2024-11-26 19:50:41.603630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:98368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.463 [2024-11-26 19:50:41.603637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:20.463 [2024-11-26 19:50:41.603649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.463 [2024-11-26 19:50:41.603656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:20.463 [2024-11-26 19:50:41.603669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:98384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.463 [2024-11-26 19:50:41.603676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:20.463 [2024-11-26 19:50:41.603688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:98392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.463 [2024-11-26 19:50:41.603695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:20.464 [2024-11-26 19:50:41.603710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:98400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.464 [2024-11-26 19:50:41.603723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 
sqhd:005d p:0 m:0 dnr:0 00:17:20.464 [2024-11-26 19:50:41.603736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:98408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.464 [2024-11-26 19:50:41.603743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:20.464 [2024-11-26 19:50:41.603755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:98416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.464 [2024-11-26 19:50:41.603762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:20.464 [2024-11-26 19:50:41.603785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:98424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.464 [2024-11-26 19:50:41.603792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:20.464 [2024-11-26 19:50:41.603805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:98432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.464 [2024-11-26 19:50:41.603812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.464 [2024-11-26 19:50:41.603824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:98440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.464 [2024-11-26 19:50:41.603831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:20.464 [2024-11-26 19:50:41.603844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:98448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.464 [2024-11-26 19:50:41.603851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:20.464 [2024-11-26 19:50:41.603864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:97880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.464 [2024-11-26 19:50:41.603871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:20.464 [2024-11-26 19:50:41.603884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:97888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.464 [2024-11-26 19:50:41.603891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:20.464 [2024-11-26 19:50:41.603904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:97896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.464 [2024-11-26 19:50:41.603911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:20.464 [2024-11-26 19:50:41.603924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:97904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.464 [2024-11-26 19:50:41.603931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:20.464 [2024-11-26 19:50:41.603944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:97912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.464 [2024-11-26 19:50:41.603951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:20.464 [2024-11-26 19:50:41.603964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:97920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.464 [2024-11-26 19:50:41.603975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:20.464 [2024-11-26 19:50:41.603988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:97928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.464 [2024-11-26 19:50:41.603996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:20.464 [2024-11-26 19:50:41.604008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:97936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.464 [2024-11-26 19:50:41.604015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:20.464 [2024-11-26 19:50:41.604028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.464 [2024-11-26 19:50:41.604035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:20.464 [2024-11-26 19:50:41.604048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:97952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.464 [2024-11-26 19:50:41.604056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:20.464 [2024-11-26 19:50:41.604069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:97960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.464 [2024-11-26 19:50:41.604076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:20.464 [2024-11-26 19:50:41.604089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:97968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.464 [2024-11-26 19:50:41.604096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:20.464 [2024-11-26 19:50:41.604109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:97976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.464 [2024-11-26 19:50:41.604116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:20.464 [2024-11-26 19:50:41.604129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:97984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.464 [2024-11-26 
19:50:41.604136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:20.464 [2024-11-26 19:50:41.604148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:97992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.464 [2024-11-26 19:50:41.604155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:20.464 [2024-11-26 19:50:41.604168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:98000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.464 [2024-11-26 19:50:41.604175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:20.464 [2024-11-26 19:50:41.604190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:98456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.464 [2024-11-26 19:50:41.604197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:20.464 [2024-11-26 19:50:41.604210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:98464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.464 [2024-11-26 19:50:41.604217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:20.464 [2024-11-26 19:50:41.604232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:98472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.464 [2024-11-26 19:50:41.604239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:20.464 [2024-11-26 19:50:41.604252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:98480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.464 [2024-11-26 19:50:41.604259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:20.464 [2024-11-26 19:50:41.604272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:98488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.464 [2024-11-26 19:50:41.604278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:20.464 [2024-11-26 19:50:41.604291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:98496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.464 [2024-11-26 19:50:41.604298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:20.464 [2024-11-26 19:50:41.604311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:98504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.465 [2024-11-26 19:50:41.604318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:20.465 [2024-11-26 19:50:41.604330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:98512 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:17:20.465 [2024-11-26 19:50:41.604337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:20.465 [2024-11-26 19:50:41.604350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:98520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.465 [2024-11-26 19:50:41.604356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:20.465 [2024-11-26 19:50:41.604369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:98528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.465 [2024-11-26 19:50:41.604377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:20.465 [2024-11-26 19:50:41.604390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:98536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.465 [2024-11-26 19:50:41.604397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:20.465 [2024-11-26 19:50:41.604409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:98544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.465 [2024-11-26 19:50:41.604416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:20.465 [2024-11-26 19:50:41.604429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:98552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.465 [2024-11-26 19:50:41.604436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.465 [2024-11-26 19:50:41.604449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:98560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.465 [2024-11-26 19:50:41.604456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:20.465 [2024-11-26 19:50:41.604472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:98568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.465 [2024-11-26 19:50:41.604479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:20.465 [2024-11-26 19:50:41.604492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:98576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.465 [2024-11-26 19:50:41.604499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:20.465 [2024-11-26 19:50:41.604511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:98008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.465 [2024-11-26 19:50:41.604518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:20.465 [2024-11-26 19:50:41.604531] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:30 nsid:1 lba:98016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.465 [2024-11-26 19:50:41.604538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:20.465 [2024-11-26 19:50:41.604550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:98024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.465 [2024-11-26 19:50:41.604557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:20.465 [2024-11-26 19:50:41.604570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:98032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.465 [2024-11-26 19:50:41.604577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:20.465 [2024-11-26 19:50:41.604589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:98040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.465 [2024-11-26 19:50:41.604596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:20.465 [2024-11-26 19:50:41.604610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:98048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.465 [2024-11-26 19:50:41.604617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:20.465 [2024-11-26 19:50:41.604630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.465 [2024-11-26 19:50:41.604636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:20.465 [2024-11-26 19:50:41.604649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:98064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.465 [2024-11-26 19:50:41.604655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:20.465 [2024-11-26 19:50:41.604671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:98584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.465 [2024-11-26 19:50:41.604678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:20.465 [2024-11-26 19:50:41.604691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.465 [2024-11-26 19:50:41.604698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:20.465 [2024-11-26 19:50:41.604711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:98600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.465 [2024-11-26 19:50:41.604721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:20.465 [2024-11-26 
19:50:41.604734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:98608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.465 [2024-11-26 19:50:41.604741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:20.465 [2024-11-26 19:50:41.604754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:98616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.465 [2024-11-26 19:50:41.604761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:20.465 [2024-11-26 19:50:41.604782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:98624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.465 [2024-11-26 19:50:41.604789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:20.465 [2024-11-26 19:50:41.604802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:98632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.465 [2024-11-26 19:50:41.604809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:20.465 [2024-11-26 19:50:41.604821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:98640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.465 [2024-11-26 19:50:41.604828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:20.465 [2024-11-26 19:50:41.604844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:98648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.465 [2024-11-26 19:50:41.604852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:20.465 [2024-11-26 19:50:41.604864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:98656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.465 [2024-11-26 19:50:41.604872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:20.465 [2024-11-26 19:50:41.604885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:98664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.465 [2024-11-26 19:50:41.604891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:20.465 [2024-11-26 19:50:41.604904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.465 [2024-11-26 19:50:41.604911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:20.465 [2024-11-26 19:50:41.604923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:98680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.465 [2024-11-26 19:50:41.604931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 
sqhd:0018 p:0 m:0 dnr:0 00:17:20.465 [2024-11-26 19:50:41.604943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:98688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.465 [2024-11-26 19:50:41.604950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:20.465 [2024-11-26 19:50:41.604963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:98696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.465 [2024-11-26 19:50:41.604973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:20.466 [2024-11-26 19:50:41.604986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:98704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.466 [2024-11-26 19:50:41.604994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:20.466 [2024-11-26 19:50:41.605007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.466 [2024-11-26 19:50:41.605014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:20.466 [2024-11-26 19:50:41.605027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:98080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.466 [2024-11-26 19:50:41.605034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:20.466 [2024-11-26 19:50:41.605047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:98088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.466 [2024-11-26 19:50:41.605055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:20.466 [2024-11-26 19:50:41.605068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:98096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.466 [2024-11-26 19:50:41.605075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:20.466 [2024-11-26 19:50:41.605088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:98104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.466 [2024-11-26 19:50:41.605095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:20.466 [2024-11-26 19:50:41.605108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:98112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.466 [2024-11-26 19:50:41.605115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.466 [2024-11-26 19:50:41.605128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:98120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.466 [2024-11-26 19:50:41.605135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:20.466 [2024-11-26 19:50:41.605148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:98128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.466 [2024-11-26 19:50:41.605155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:20.466 [2024-11-26 19:50:41.605168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:98136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.466 [2024-11-26 19:50:41.605175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:20.466 [2024-11-26 19:50:41.605188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:98144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.466 [2024-11-26 19:50:41.605195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:20.466 [2024-11-26 19:50:41.605207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:98152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.466 [2024-11-26 19:50:41.605214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:20.466 [2024-11-26 19:50:41.605231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:98160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.466 [2024-11-26 19:50:41.605238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:20.466 [2024-11-26 19:50:41.605250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:98168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.466 [2024-11-26 19:50:41.605258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:20.466 [2024-11-26 19:50:41.605271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:98176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.466 [2024-11-26 19:50:41.605279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:20.466 [2024-11-26 19:50:41.605292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:98184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.466 [2024-11-26 19:50:41.605299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:20.466 [2024-11-26 19:50:41.605819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:98192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.466 [2024-11-26 19:50:41.605835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:20.466 [2024-11-26 19:50:41.605856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:98712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.466 [2024-11-26 19:50:41.605863] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:20.466 [2024-11-26 19:50:41.605882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:98720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.466 [2024-11-26 19:50:41.605889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:20.466 [2024-11-26 19:50:41.605914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:98728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.466 [2024-11-26 19:50:41.605922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:20.466 [2024-11-26 19:50:41.605940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:98736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.466 [2024-11-26 19:50:41.605947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:20.466 [2024-11-26 19:50:41.605966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:98744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.466 [2024-11-26 19:50:41.605973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:20.466 [2024-11-26 19:50:41.605992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:98752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.466 [2024-11-26 19:50:41.605999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:20.466 [2024-11-26 19:50:41.606018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:98760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.466 [2024-11-26 19:50:41.606026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:20.466 [2024-11-26 19:50:41.606060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:98768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.466 [2024-11-26 19:50:41.606069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:20.466 [2024-11-26 19:50:41.606088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:98776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.466 [2024-11-26 19:50:41.606096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:20.466 [2024-11-26 19:50:41.606114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:98784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.466 [2024-11-26 19:50:41.606121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:20.466 [2024-11-26 19:50:41.606140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:98792 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:17:20.466 [2024-11-26 19:50:41.606148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:20.466 [2024-11-26 19:50:41.606166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:98800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.466 [2024-11-26 19:50:41.606174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:20.466 [2024-11-26 19:50:41.606192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:98808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.466 [2024-11-26 19:50:41.606199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:20.466 [2024-11-26 19:50:41.606217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.466 [2024-11-26 19:50:41.606224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:20.466 [2024-11-26 19:50:41.606242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:98824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.466 [2024-11-26 19:50:41.606249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:20.466 [2024-11-26 19:50:41.606270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:98832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.466 [2024-11-26 19:50:41.606278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:20.466 11726.68 IOPS, 45.81 MiB/s [2024-11-26T19:51:15.713Z] 11216.83 IOPS, 43.82 MiB/s [2024-11-26T19:51:15.713Z] 10749.46 IOPS, 41.99 MiB/s [2024-11-26T19:51:15.713Z] 10319.48 IOPS, 40.31 MiB/s [2024-11-26T19:51:15.713Z] 9922.58 IOPS, 38.76 MiB/s [2024-11-26T19:51:15.713Z] 9555.07 IOPS, 37.32 MiB/s [2024-11-26T19:51:15.713Z] 9213.82 IOPS, 35.99 MiB/s [2024-11-26T19:51:15.713Z] 9140.97 IOPS, 35.71 MiB/s [2024-11-26T19:51:15.713Z] 9251.33 IOPS, 36.14 MiB/s [2024-11-26T19:51:15.713Z] 9355.35 IOPS, 36.54 MiB/s [2024-11-26T19:51:15.713Z] 9452.50 IOPS, 36.92 MiB/s [2024-11-26T19:51:15.713Z] 9544.00 IOPS, 37.28 MiB/s [2024-11-26T19:51:15.713Z] 9558.59 IOPS, 37.34 MiB/s [2024-11-26T19:51:15.714Z] [2024-11-26 19:50:54.743125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.467 [2024-11-26 19:50:54.743176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:20.467 [2024-11-26 19:50:54.743209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:23048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.467 [2024-11-26 19:50:54.743218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:20.467 [2024-11-26 19:50:54.743249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:23056 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.467 [2024-11-26 19:50:54.743256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:20.467 [2024-11-26 19:50:54.743269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:23064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.467 [2024-11-26 19:50:54.743276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:20.467 [2024-11-26 19:50:54.743288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:23072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.467 [2024-11-26 19:50:54.743295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:20.467 [2024-11-26 19:50:54.743308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:23080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.467 [2024-11-26 19:50:54.743315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:20.467 [2024-11-26 19:50:54.743327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:23088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.467 [2024-11-26 19:50:54.743334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:20.467 [2024-11-26 19:50:54.743346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:23096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.467 [2024-11-26 19:50:54.743353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:20.467 [2024-11-26 19:50:54.743365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:22528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.467 [2024-11-26 19:50:54.743372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:20.467 [2024-11-26 19:50:54.743385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:22536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.467 [2024-11-26 19:50:54.743392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:20.467 [2024-11-26 19:50:54.743404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:22544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.467 [2024-11-26 19:50:54.743411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:20.467 [2024-11-26 19:50:54.743423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:22552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.467 [2024-11-26 19:50:54.743430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:20.467 [2024-11-26 19:50:54.743442] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:76 nsid:1 lba:22560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.467 [2024-11-26 19:50:54.743449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:20.467 [2024-11-26 19:50:54.743461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.467 [2024-11-26 19:50:54.743468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:20.467 [2024-11-26 19:50:54.743480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:22576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.467 [2024-11-26 19:50:54.743492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:20.467 [2024-11-26 19:50:54.743505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:22584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.467 [2024-11-26 19:50:54.743512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:20.467 [2024-11-26 19:50:54.743525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:22592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.467 [2024-11-26 19:50:54.743532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:20.467 [2024-11-26 19:50:54.743545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:22600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.467 [2024-11-26 19:50:54.743553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:20.467 [2024-11-26 19:50:54.743566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:22608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.467 [2024-11-26 19:50:54.743573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:20.467 [2024-11-26 19:50:54.743585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:22616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.467 [2024-11-26 19:50:54.743592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:20.467 [2024-11-26 19:50:54.743605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:22624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.467 [2024-11-26 19:50:54.743612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:20.467 [2024-11-26 19:50:54.743624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.467 [2024-11-26 19:50:54.743631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:20.467 [2024-11-26 
19:50:54.743644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:22640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.467 [2024-11-26 19:50:54.743651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:20.467 [2024-11-26 19:50:54.743663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:22648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.467 [2024-11-26 19:50:54.743670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:20.467 [2024-11-26 19:50:54.743700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:23104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.467 [2024-11-26 19:50:54.743709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.467 [2024-11-26 19:50:54.743718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:23112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.467 [2024-11-26 19:50:54.743724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.467 [2024-11-26 19:50:54.743733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.467 [2024-11-26 19:50:54.743743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.467 [2024-11-26 19:50:54.743752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:23128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.467 [2024-11-26 19:50:54.743759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.467 [2024-11-26 19:50:54.743777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:23136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.467 [2024-11-26 19:50:54.743784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.467 [2024-11-26 19:50:54.743792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:23144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.467 [2024-11-26 19:50:54.743799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.467 [2024-11-26 19:50:54.743807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:23152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.468 [2024-11-26 19:50:54.743814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.468 [2024-11-26 19:50:54.743823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:23160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.468 [2024-11-26 19:50:54.743829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.468 [2024-11-26 19:50:54.743838] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:22656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.468 [2024-11-26 19:50:54.743844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.468 [2024-11-26 19:50:54.743852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:22664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.468 [2024-11-26 19:50:54.743859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.468 [2024-11-26 19:50:54.743868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:22672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.468 [2024-11-26 19:50:54.743874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.468 [2024-11-26 19:50:54.743882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:22680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.468 [2024-11-26 19:50:54.743889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.468 [2024-11-26 19:50:54.743898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:22688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.468 [2024-11-26 19:50:54.743905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.468 [2024-11-26 19:50:54.743913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:22696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.468 [2024-11-26 19:50:54.743920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.468 [2024-11-26 19:50:54.743928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:22704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.468 [2024-11-26 19:50:54.743935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.468 [2024-11-26 19:50:54.743943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:22712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.468 [2024-11-26 19:50:54.743953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.468 [2024-11-26 19:50:54.743961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.468 [2024-11-26 19:50:54.743968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.468 [2024-11-26 19:50:54.743977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:22728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.468 [2024-11-26 19:50:54.743983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.468 [2024-11-26 19:50:54.743992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:43 nsid:1 lba:22736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.468 [2024-11-26 19:50:54.743999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.468 [2024-11-26 19:50:54.744007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:22744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.468 [2024-11-26 19:50:54.744014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.468 [2024-11-26 19:50:54.744022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:22752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.468 [2024-11-26 19:50:54.744028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.468 [2024-11-26 19:50:54.744036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:22760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.468 [2024-11-26 19:50:54.744043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.468 [2024-11-26 19:50:54.744051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:22768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.468 [2024-11-26 19:50:54.744058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.468 [2024-11-26 19:50:54.744067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.468 [2024-11-26 19:50:54.744073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.468 [2024-11-26 19:50:54.744082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:22784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.468 [2024-11-26 19:50:54.744088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.468 [2024-11-26 19:50:54.744097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:22792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.468 [2024-11-26 19:50:54.744103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.468 [2024-11-26 19:50:54.744112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:22800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.468 [2024-11-26 19:50:54.744119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.468 [2024-11-26 19:50:54.744128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.468 [2024-11-26 19:50:54.744134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.468 [2024-11-26 19:50:54.744145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:22816 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.468 [2024-11-26 19:50:54.744152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.468 [2024-11-26 19:50:54.744160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:22824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.468 [2024-11-26 19:50:54.744167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.468 [2024-11-26 19:50:54.744175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:22832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.468 [2024-11-26 19:50:54.744182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.468 [2024-11-26 19:50:54.744190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:22840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.468 [2024-11-26 19:50:54.744197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.468 [2024-11-26 19:50:54.744205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:23168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.468 [2024-11-26 19:50:54.744212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.468 [2024-11-26 19:50:54.744220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:23176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.468 [2024-11-26 19:50:54.744227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.468 [2024-11-26 19:50:54.744235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:23184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.468 [2024-11-26 19:50:54.744241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.468 [2024-11-26 19:50:54.744250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:23192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.468 [2024-11-26 19:50:54.744256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.468 [2024-11-26 19:50:54.744265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:23200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.468 [2024-11-26 19:50:54.744272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.468 [2024-11-26 19:50:54.744280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:23208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.468 [2024-11-26 19:50:54.744286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.468 [2024-11-26 19:50:54.744294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:23216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:17:20.468 [2024-11-26 19:50:54.744302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.468 [2024-11-26 19:50:54.744310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.468 [2024-11-26 19:50:54.744316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.468 [2024-11-26 19:50:54.744325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.468 [2024-11-26 19:50:54.744334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.468 [2024-11-26 19:50:54.744342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:23240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.468 [2024-11-26 19:50:54.744348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.468 [2024-11-26 19:50:54.744357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:23248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.468 [2024-11-26 19:50:54.744363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.468 [2024-11-26 19:50:54.744372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.468 [2024-11-26 19:50:54.744379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.468 [2024-11-26 19:50:54.744387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:23264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.468 [2024-11-26 19:50:54.744393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.468 [2024-11-26 19:50:54.744402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:23272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.468 [2024-11-26 19:50:54.744408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.468 [2024-11-26 19:50:54.744416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.469 [2024-11-26 19:50:54.744423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.469 [2024-11-26 19:50:54.744431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:23288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.469 [2024-11-26 19:50:54.744437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.469 [2024-11-26 19:50:54.744445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.469 [2024-11-26 19:50:54.744452] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.469 [2024-11-26 19:50:54.744460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.469 [2024-11-26 19:50:54.744467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.469 [2024-11-26 19:50:54.744475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:22864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.469 [2024-11-26 19:50:54.744482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.469 [2024-11-26 19:50:54.744490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:22872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.469 [2024-11-26 19:50:54.744497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.469 [2024-11-26 19:50:54.744505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:22880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.469 [2024-11-26 19:50:54.744512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.469 [2024-11-26 19:50:54.744523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:22888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.469 [2024-11-26 19:50:54.744530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.469 [2024-11-26 19:50:54.744538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.469 [2024-11-26 19:50:54.744545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.469 [2024-11-26 19:50:54.744554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.469 [2024-11-26 19:50:54.744560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.469 [2024-11-26 19:50:54.744568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:23296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.469 [2024-11-26 19:50:54.744575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.469 [2024-11-26 19:50:54.744583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:23304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.469 [2024-11-26 19:50:54.744590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.469 [2024-11-26 19:50:54.744598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:23312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.469 [2024-11-26 19:50:54.744605] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.469 [2024-11-26 19:50:54.744613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:23320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.469 [2024-11-26 19:50:54.744624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.469 [2024-11-26 19:50:54.744633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:23328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.469 [2024-11-26 19:50:54.744639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.469 [2024-11-26 19:50:54.744648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:23336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.469 [2024-11-26 19:50:54.744654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.469 [2024-11-26 19:50:54.744662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:23344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.469 [2024-11-26 19:50:54.744669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.469 [2024-11-26 19:50:54.744677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:23352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.469 [2024-11-26 19:50:54.744684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.469 [2024-11-26 19:50:54.744692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:23360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.469 [2024-11-26 19:50:54.744699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.469 [2024-11-26 19:50:54.744707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:23368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.469 [2024-11-26 19:50:54.744717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.469 [2024-11-26 19:50:54.744725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:23376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.469 [2024-11-26 19:50:54.744732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.469 [2024-11-26 19:50:54.744740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:23384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.469 [2024-11-26 19:50:54.744746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.469 [2024-11-26 19:50:54.744755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:23392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.469 [2024-11-26 19:50:54.744761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.469 [2024-11-26 19:50:54.744776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:23400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.469 [2024-11-26 19:50:54.744783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.469 [2024-11-26 19:50:54.744791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:23408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.469 [2024-11-26 19:50:54.744799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.469 [2024-11-26 19:50:54.744807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:23416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.469 [2024-11-26 19:50:54.744814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.469 [2024-11-26 19:50:54.744822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:22912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.469 [2024-11-26 19:50:54.744829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.469 [2024-11-26 19:50:54.744837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.469 [2024-11-26 19:50:54.744844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.469 [2024-11-26 19:50:54.744852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.469 [2024-11-26 19:50:54.744859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.469 [2024-11-26 19:50:54.744867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:22936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.469 [2024-11-26 19:50:54.744874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.469 [2024-11-26 19:50:54.744883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:22944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.469 [2024-11-26 19:50:54.744889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.469 [2024-11-26 19:50:54.744898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:22952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.469 [2024-11-26 19:50:54.744904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.469 [2024-11-26 19:50:54.744912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:22960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.469 [2024-11-26 19:50:54.744922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.469 
[2024-11-26 19:50:54.744930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.469 [2024-11-26 19:50:54.744937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.470 [2024-11-26 19:50:54.744945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.470 [2024-11-26 19:50:54.744951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.470 [2024-11-26 19:50:54.744960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:22984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.470 [2024-11-26 19:50:54.744966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.470 [2024-11-26 19:50:54.744975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:22992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.470 [2024-11-26 19:50:54.744982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.470 [2024-11-26 19:50:54.744990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:23000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.470 [2024-11-26 19:50:54.744997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.470 [2024-11-26 19:50:54.745005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.470 [2024-11-26 19:50:54.745011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.470 [2024-11-26 19:50:54.745019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:23016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.470 [2024-11-26 19:50:54.745027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.470 [2024-11-26 19:50:54.745035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:23024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.470 [2024-11-26 19:50:54.745042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.470 [2024-11-26 19:50:54.745049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd2310 is same with the state(6) to be set 00:17:20.470 [2024-11-26 19:50:54.745058] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:20.470 [2024-11-26 19:50:54.745063] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:20.470 [2024-11-26 19:50:54.745068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23032 len:8 PRP1 0x0 PRP2 0x0 00:17:20.470 [2024-11-26 19:50:54.745075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:17:20.470 [2024-11-26 19:50:54.745083] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:20.470 [2024-11-26 19:50:54.745088] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:20.470 [2024-11-26 19:50:54.745093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:8 PRP1 0x0 PRP2 0x0 00:17:20.470 [2024-11-26 19:50:54.745100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.470 [2024-11-26 19:50:54.745109] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:20.470 [2024-11-26 19:50:54.745114] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:20.470 [2024-11-26 19:50:54.745119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23432 len:8 PRP1 0x0 PRP2 0x0 00:17:20.470 [2024-11-26 19:50:54.745126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.470 [2024-11-26 19:50:54.745133] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:20.470 [2024-11-26 19:50:54.745138] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:20.470 [2024-11-26 19:50:54.745143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23440 len:8 PRP1 0x0 PRP2 0x0 00:17:20.470 [2024-11-26 19:50:54.745149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.470 [2024-11-26 19:50:54.745156] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:20.470 [2024-11-26 19:50:54.745161] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:20.470 [2024-11-26 19:50:54.745166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23448 len:8 PRP1 0x0 PRP2 0x0 00:17:20.470 [2024-11-26 19:50:54.745173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.470 [2024-11-26 19:50:54.745180] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:20.470 [2024-11-26 19:50:54.745184] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:20.470 [2024-11-26 19:50:54.745189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:8 PRP1 0x0 PRP2 0x0 00:17:20.470 [2024-11-26 19:50:54.745196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.470 [2024-11-26 19:50:54.745202] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:20.470 [2024-11-26 19:50:54.745207] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:20.470 [2024-11-26 19:50:54.745212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23464 len:8 PRP1 0x0 PRP2 0x0 00:17:20.470 [2024-11-26 19:50:54.745218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.470 [2024-11-26 19:50:54.745225] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:20.470 [2024-11-26 19:50:54.745230] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:20.470 [2024-11-26 19:50:54.745235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23472 len:8 PRP1 0x0 PRP2 0x0 00:17:20.470 [2024-11-26 19:50:54.745242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.470 [2024-11-26 19:50:54.745248] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:20.470 [2024-11-26 19:50:54.745253] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:20.470 [2024-11-26 19:50:54.745258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23480 len:8 PRP1 0x0 PRP2 0x0 00:17:20.470 [2024-11-26 19:50:54.745265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.470 [2024-11-26 19:50:54.745272] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:20.470 [2024-11-26 19:50:54.745276] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:20.470 [2024-11-26 19:50:54.745281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:8 PRP1 0x0 PRP2 0x0 00:17:20.470 [2024-11-26 19:50:54.745290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.470 [2024-11-26 19:50:54.745299] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:20.470 [2024-11-26 19:50:54.745304] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:20.470 [2024-11-26 19:50:54.745309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23496 len:8 PRP1 0x0 PRP2 0x0 00:17:20.470 [2024-11-26 19:50:54.745315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.470 [2024-11-26 19:50:54.745322] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:20.470 [2024-11-26 19:50:54.745327] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:20.470 [2024-11-26 19:50:54.745331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23504 len:8 PRP1 0x0 PRP2 0x0 00:17:20.470 [2024-11-26 19:50:54.745338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.470 [2024-11-26 19:50:54.745345] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:20.470 [2024-11-26 19:50:54.745349] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:20.470 [2024-11-26 19:50:54.745355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23512 len:8 PRP1 0x0 PRP2 0x0 00:17:20.470 [2024-11-26 19:50:54.745361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.470 [2024-11-26 19:50:54.745368] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:17:20.470 [2024-11-26 19:50:54.745372] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:20.470 [2024-11-26 19:50:54.745377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 len:8 PRP1 0x0 PRP2 0x0 00:17:20.470 [2024-11-26 19:50:54.745384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.470 [2024-11-26 19:50:54.745391] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:20.470 [2024-11-26 19:50:54.745395] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:20.470 [2024-11-26 19:50:54.745400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23528 len:8 PRP1 0x0 PRP2 0x0 00:17:20.470 [2024-11-26 19:50:54.745407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.470 [2024-11-26 19:50:54.745415] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:20.470 [2024-11-26 19:50:54.745419] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:20.470 [2024-11-26 19:50:54.745424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23536 len:8 PRP1 0x0 PRP2 0x0 00:17:20.470 [2024-11-26 19:50:54.745430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.470 [2024-11-26 19:50:54.745437] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:20.470 [2024-11-26 19:50:54.745442] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:20.470 [2024-11-26 19:50:54.745447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23544 len:8 PRP1 0x0 PRP2 0x0 00:17:20.470 [2024-11-26 19:50:54.745454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.470 [2024-11-26 19:50:54.745545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.470 [2024-11-26 19:50:54.745557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.470 [2024-11-26 19:50:54.745572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.470 [2024-11-26 19:50:54.745579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.470 [2024-11-26 19:50:54.745588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.470 [2024-11-26 19:50:54.745594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.471 [2024-11-26 19:50:54.745602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:20.471 [2024-11-26 19:50:54.745608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.471 [2024-11-26 19:50:54.745616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:20.471 [2024-11-26 19:50:54.745623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:20.471 [2024-11-26 19:50:54.745634] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc431e0 is same with the state(6) to be set 00:17:20.471 [2024-11-26 19:50:54.746476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:17:20.471 [2024-11-26 19:50:54.746497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc431e0 (9): Bad file descriptor 00:17:20.471 [2024-11-26 19:50:54.746777] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:20.471 [2024-11-26 19:50:54.746796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc431e0 with addr=10.0.0.3, port=4421 00:17:20.471 [2024-11-26 19:50:54.746804] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc431e0 is same with the state(6) to be set 00:17:20.471 [2024-11-26 19:50:54.746835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc431e0 (9): Bad file descriptor 00:17:20.471 [2024-11-26 19:50:54.746852] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:17:20.471 [2024-11-26 19:50:54.746859] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:17:20.471 [2024-11-26 19:50:54.746867] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:17:20.471 [2024-11-26 19:50:54.746874] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:17:20.471 [2024-11-26 19:50:54.746882] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:17:20.471 9600.54 IOPS, 37.50 MiB/s [2024-11-26T19:51:15.718Z] 9676.75 IOPS, 37.80 MiB/s [2024-11-26T19:51:15.718Z] 9754.24 IOPS, 38.10 MiB/s [2024-11-26T19:51:15.718Z] 9826.18 IOPS, 38.38 MiB/s [2024-11-26T19:51:15.718Z] 9894.64 IOPS, 38.65 MiB/s [2024-11-26T19:51:15.718Z] 9958.08 IOPS, 38.90 MiB/s [2024-11-26T19:51:15.718Z] 10018.80 IOPS, 39.14 MiB/s [2024-11-26T19:51:15.718Z] 10065.60 IOPS, 39.32 MiB/s [2024-11-26T19:51:15.718Z] 10119.88 IOPS, 39.53 MiB/s [2024-11-26T19:51:15.718Z] 10174.61 IOPS, 39.74 MiB/s [2024-11-26T19:51:15.718Z] [2024-11-26 19:51:04.794352] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
00:17:20.471 10233.60 IOPS, 39.98 MiB/s [2024-11-26T19:51:15.718Z] 10294.35 IOPS, 40.21 MiB/s [2024-11-26T19:51:15.718Z] 10353.36 IOPS, 40.44 MiB/s [2024-11-26T19:51:15.718Z] 10412.33 IOPS, 40.67 MiB/s [2024-11-26T19:51:15.718Z] 10459.76 IOPS, 40.86 MiB/s [2024-11-26T19:51:15.718Z] 10512.00 IOPS, 41.06 MiB/s [2024-11-26T19:51:15.718Z] 10562.20 IOPS, 41.26 MiB/s [2024-11-26T19:51:15.718Z] 10609.23 IOPS, 41.44 MiB/s [2024-11-26T19:51:15.718Z] 10657.51 IOPS, 41.63 MiB/s [2024-11-26T19:51:15.718Z] 10702.07 IOPS, 41.80 MiB/s [2024-11-26T19:51:15.718Z] Received shutdown signal, test time was about 54.498676 seconds 00:17:20.471 00:17:20.471 Latency(us) 00:17:20.471 [2024-11-26T19:51:15.718Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:20.471 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:20.471 Verification LBA range: start 0x0 length 0x4000 00:17:20.471 Nvme0n1 : 54.50 10721.71 41.88 0.00 0.00 11915.76 368.64 7020619.62 00:17:20.471 [2024-11-26T19:51:15.718Z] =================================================================================================================== 00:17:20.471 [2024-11-26T19:51:15.718Z] Total : 10721.71 41.88 0.00 0.00 11915.76 368.64 7020619.62 00:17:20.471 19:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:20.471 19:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:17:20.471 19:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:20.471 19:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:17:20.471 19:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:20.471 19:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync 00:17:20.471 19:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:20.471 19:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e 00:17:20.471 19:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:20.471 19:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:20.471 rmmod nvme_tcp 00:17:20.471 rmmod nvme_fabrics 00:17:20.471 rmmod nvme_keyring 00:17:20.471 19:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:20.471 19:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e 00:17:20.471 19:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:17:20.471 19:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 79304 ']' 00:17:20.471 19:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 79304 00:17:20.471 19:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 79304 ']' 00:17:20.471 19:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 79304 00:17:20.471 19:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:17:20.471 19:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:20.471 19:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79304 00:17:20.471 19:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:20.471 19:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:20.471 killing process with pid 79304 00:17:20.471 19:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79304' 00:17:20.471 19:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 79304 00:17:20.471 19:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 79304 00:17:20.471 19:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:20.471 19:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:20.471 19:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:20.471 19:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:17:20.471 19:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save 00:17:20.471 19:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:20.471 19:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:17:20.471 19:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:20.471 19:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:20.471 19:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:20.471 19:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:20.471 19:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:20.471 19:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:20.471 19:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:20.471 19:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:20.471 19:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:20.471 19:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:20.471 19:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:20.471 19:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:20.471 19:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:20.471 19:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:20.471 19:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:20.471 19:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:20.471 19:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:20.471 19:51:15 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:20.471 19:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:20.471 19:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:17:20.471 00:17:20.471 real 0m59.532s 00:17:20.471 user 2m47.958s 00:17:20.471 sys 0m13.777s 00:17:20.471 19:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:20.471 19:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:20.471 ************************************ 00:17:20.471 END TEST nvmf_host_multipath 00:17:20.471 ************************************ 00:17:20.471 19:51:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:17:20.471 19:51:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:20.471 19:51:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:20.471 19:51:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.471 ************************************ 00:17:20.471 START TEST nvmf_timeout 00:17:20.471 ************************************ 00:17:20.471 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:17:20.471 * Looking for test storage... 00:17:20.471 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:20.471 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:20.471 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lcov --version 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:20.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:20.472 --rc genhtml_branch_coverage=1 00:17:20.472 --rc genhtml_function_coverage=1 00:17:20.472 --rc genhtml_legend=1 00:17:20.472 --rc geninfo_all_blocks=1 00:17:20.472 --rc geninfo_unexecuted_blocks=1 00:17:20.472 00:17:20.472 ' 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:20.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:20.472 --rc genhtml_branch_coverage=1 00:17:20.472 --rc genhtml_function_coverage=1 00:17:20.472 --rc genhtml_legend=1 00:17:20.472 --rc geninfo_all_blocks=1 00:17:20.472 --rc geninfo_unexecuted_blocks=1 00:17:20.472 00:17:20.472 ' 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:20.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:20.472 --rc genhtml_branch_coverage=1 00:17:20.472 --rc genhtml_function_coverage=1 00:17:20.472 --rc genhtml_legend=1 00:17:20.472 --rc geninfo_all_blocks=1 00:17:20.472 --rc geninfo_unexecuted_blocks=1 00:17:20.472 00:17:20.472 ' 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:20.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:20.472 --rc genhtml_branch_coverage=1 00:17:20.472 --rc genhtml_function_coverage=1 00:17:20.472 --rc genhtml_legend=1 00:17:20.472 --rc geninfo_all_blocks=1 00:17:20.472 --rc geninfo_unexecuted_blocks=1 00:17:20.472 00:17:20.472 ' 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:20.472 
19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=91838eb1-5852-43eb-90b2-09876f360ab2 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:20.472 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:20.472 19:51:15 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:20.472 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:20.473 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:20.473 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:20.473 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:20.473 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:20.473 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:20.473 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:20.473 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:20.473 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:20.473 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:20.473 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:20.473 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:20.473 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:20.473 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:20.473 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:20.473 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:20.473 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:20.473 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:20.473 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:20.473 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:20.473 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:20.473 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:20.473 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:20.473 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:20.473 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:20.473 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:20.473 Cannot find device "nvmf_init_br" 00:17:20.473 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:17:20.473 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:20.473 Cannot find device "nvmf_init_br2" 00:17:20.473 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:17:20.473 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:17:20.473 Cannot find device "nvmf_tgt_br" 00:17:20.473 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:17:20.473 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:20.732 Cannot find device "nvmf_tgt_br2" 00:17:20.732 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:17:20.732 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:20.732 Cannot find device "nvmf_init_br" 00:17:20.732 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:17:20.732 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:20.732 Cannot find device "nvmf_init_br2" 00:17:20.732 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:17:20.732 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:20.732 Cannot find device "nvmf_tgt_br" 00:17:20.732 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:17:20.732 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:20.732 Cannot find device "nvmf_tgt_br2" 00:17:20.732 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:17:20.732 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:20.732 Cannot find device "nvmf_br" 00:17:20.732 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:17:20.732 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:20.732 Cannot find device "nvmf_init_if" 00:17:20.732 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:17:20.732 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:20.732 Cannot find device "nvmf_init_if2" 00:17:20.732 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:17:20.732 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:20.732 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:20.732 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:17:20.732 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:20.732 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:20.732 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:17:20.732 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:20.732 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:20.732 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:20.732 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:20.732 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:20.732 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:17:20.732 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:20.732 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:20.732 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:20.732 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:20.732 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:20.732 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:20.732 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:20.732 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:20.732 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:20.732 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:20.732 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:20.732 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:20.732 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:20.732 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:20.732 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:20.732 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:20.732 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:20.732 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:20.732 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:20.732 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:20.732 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:20.732 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:20.732 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:20.732 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:20.732 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:20.732 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
00:17:20.732 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:20.732 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:20.732 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.115 ms 00:17:20.732 00:17:20.732 --- 10.0.0.3 ping statistics --- 00:17:20.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.732 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:17:20.732 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:20.732 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:20.732 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:17:20.732 00:17:20.732 --- 10.0.0.4 ping statistics --- 00:17:20.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.732 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:17:20.732 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:20.732 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:20.732 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:17:20.732 00:17:20.732 --- 10.0.0.1 ping statistics --- 00:17:20.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.732 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:17:20.732 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:20.732 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:20.732 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.034 ms 00:17:20.732 00:17:20.732 --- 10.0.0.2 ping statistics --- 00:17:20.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.732 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:17:20.732 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:20.732 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:17:20.732 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:20.732 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:20.732 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:20.732 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:20.732 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:20.732 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:20.732 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:20.991 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:17:20.991 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:20.991 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:20.991 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:20.991 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=80530 00:17:20.991 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 80530 00:17:20.991 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 80530 ']' 00:17:20.991 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:17:20.991 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:20.991 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:20.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:20.991 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:20.991 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:20.991 19:51:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:20.991 [2024-11-26 19:51:16.022301] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:17:20.991 [2024-11-26 19:51:16.022347] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:20.991 [2024-11-26 19:51:16.153194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:20.991 [2024-11-26 19:51:16.187321] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:20.991 [2024-11-26 19:51:16.187358] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:20.991 [2024-11-26 19:51:16.187365] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:20.991 [2024-11-26 19:51:16.187370] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:20.991 [2024-11-26 19:51:16.187374] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:20.991 [2024-11-26 19:51:16.188038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:20.991 [2024-11-26 19:51:16.188339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:20.991 [2024-11-26 19:51:16.217833] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:21.926 19:51:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:21.926 19:51:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:17:21.926 19:51:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:21.926 19:51:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:21.926 19:51:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:21.926 19:51:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:21.926 19:51:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:21.926 19:51:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:21.926 [2024-11-26 19:51:17.105790] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:21.926 19:51:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:22.184 Malloc0 00:17:22.184 19:51:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:22.441 19:51:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:22.441 19:51:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:22.700 [2024-11-26 19:51:17.806416] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:22.700 19:51:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=80573 00:17:22.700 19:51:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:17:22.700 19:51:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 80573 /var/tmp/bdevperf.sock 00:17:22.700 19:51:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 80573 ']' 00:17:22.700 19:51:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:22.700 19:51:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:22.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:22.700 19:51:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:17:22.700 19:51:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:22.700 19:51:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:22.700 [2024-11-26 19:51:17.861078] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:17:22.700 [2024-11-26 19:51:17.861134] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80573 ] 00:17:22.958 [2024-11-26 19:51:17.997420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.958 [2024-11-26 19:51:18.028677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:22.958 [2024-11-26 19:51:18.056172] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:23.575 19:51:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:23.575 19:51:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:17:23.575 19:51:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:17:23.833 19:51:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:17:24.091 NVMe0n1 00:17:24.091 19:51:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=80597 00:17:24.091 19:51:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:17:24.091 19:51:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:24.091 Running I/O for 10 seconds... 
00:17:25.025 19:51:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:25.286 9285.00 IOPS, 36.27 MiB/s [2024-11-26T19:51:20.533Z] [2024-11-26 19:51:20.389101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:25.286 [2024-11-26 19:51:20.389147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.286 [2024-11-26 19:51:20.389154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:25.286 [2024-11-26 19:51:20.389159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.286 [2024-11-26 19:51:20.389164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:25.286 [2024-11-26 19:51:20.389169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.286 [2024-11-26 19:51:20.389174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:25.286 [2024-11-26 19:51:20.389178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.286 [2024-11-26 19:51:20.389183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160ae50 is same with the state(6) to be set 00:17:25.286 [2024-11-26 19:51:20.389351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:82472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.286 [2024-11-26 19:51:20.389359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.286 [2024-11-26 19:51:20.389372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:82600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.286 [2024-11-26 19:51:20.389376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.286 [2024-11-26 19:51:20.389382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.286 [2024-11-26 19:51:20.389387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.286 [2024-11-26 19:51:20.389393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:82616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.286 [2024-11-26 19:51:20.389397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.286 [2024-11-26 19:51:20.389403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:82624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.286 [2024-11-26 19:51:20.389407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:17:25.286 [2024-11-26 19:51:20.389413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:82632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.286 [2024-11-26 19:51:20.389417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.286 [2024-11-26 19:51:20.389423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:82640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.286 [2024-11-26 19:51:20.389427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.286 [2024-11-26 19:51:20.389433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:82648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.286 [2024-11-26 19:51:20.389437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.286 [2024-11-26 19:51:20.389443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:82656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.286 [2024-11-26 19:51:20.389447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.286 [2024-11-26 19:51:20.389452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:82664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.286 [2024-11-26 19:51:20.389457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.286 [2024-11-26 19:51:20.389463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:82672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.286 [2024-11-26 19:51:20.389467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.286 [2024-11-26 19:51:20.389472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:82680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.286 [2024-11-26 19:51:20.389477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.286 [2024-11-26 19:51:20.389483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:82688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.286 [2024-11-26 19:51:20.389489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.286 [2024-11-26 19:51:20.389501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:82696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.286 [2024-11-26 19:51:20.389505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.286 [2024-11-26 19:51:20.389511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:82704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.286 [2024-11-26 19:51:20.389515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.286 [2024-11-26 
19:51:20.389521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:82712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.286 [2024-11-26 19:51:20.389526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.286 [2024-11-26 19:51:20.389532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:82720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.286 [2024-11-26 19:51:20.389536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.286 [2024-11-26 19:51:20.389542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:82728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.286 [2024-11-26 19:51:20.389546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.286 [2024-11-26 19:51:20.389552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:82736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.286 [2024-11-26 19:51:20.389556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.286 [2024-11-26 19:51:20.389562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:82744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.286 [2024-11-26 19:51:20.389566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.286 [2024-11-26 19:51:20.389571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:82752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.286 [2024-11-26 19:51:20.389576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.286 [2024-11-26 19:51:20.389582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:82760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.286 [2024-11-26 19:51:20.389586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.286 [2024-11-26 19:51:20.389593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:82768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.286 [2024-11-26 19:51:20.389603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.286 [2024-11-26 19:51:20.389609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.286 [2024-11-26 19:51:20.389613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.286 [2024-11-26 19:51:20.389619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:82784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.286 [2024-11-26 19:51:20.389623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.286 [2024-11-26 19:51:20.389629] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:82792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.286 [2024-11-26 19:51:20.389633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.286 [2024-11-26 19:51:20.389639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:82800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.286 [2024-11-26 19:51:20.389643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.286 [2024-11-26 19:51:20.389649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:82808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.287 [2024-11-26 19:51:20.389653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.287 [2024-11-26 19:51:20.389659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:82816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.287 [2024-11-26 19:51:20.389663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.287 [2024-11-26 19:51:20.389669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:82824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.287 [2024-11-26 19:51:20.389673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.287 [2024-11-26 19:51:20.389679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:82832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.287 [2024-11-26 19:51:20.389683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.287 [2024-11-26 19:51:20.389688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:82840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.287 [2024-11-26 19:51:20.389693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.287 [2024-11-26 19:51:20.389698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:82848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.287 [2024-11-26 19:51:20.389702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.287 [2024-11-26 19:51:20.389708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:82856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.287 [2024-11-26 19:51:20.389712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.287 [2024-11-26 19:51:20.389718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:82864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.287 [2024-11-26 19:51:20.389721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.287 [2024-11-26 19:51:20.389727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:10 nsid:1 lba:82872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.287 [2024-11-26 19:51:20.389731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.287 [2024-11-26 19:51:20.389737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:82880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.287 [2024-11-26 19:51:20.389741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.287 [2024-11-26 19:51:20.389747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:82888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.287 [2024-11-26 19:51:20.389752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.287 [2024-11-26 19:51:20.389757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:82896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.287 [2024-11-26 19:51:20.389761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.287 [2024-11-26 19:51:20.389776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:82904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.287 [2024-11-26 19:51:20.389782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.287 [2024-11-26 19:51:20.389788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:82912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.287 [2024-11-26 19:51:20.389792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.287 [2024-11-26 19:51:20.389798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.287 [2024-11-26 19:51:20.389802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.287 [2024-11-26 19:51:20.389808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:82928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.287 [2024-11-26 19:51:20.389813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.287 [2024-11-26 19:51:20.389818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:82936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.287 [2024-11-26 19:51:20.389822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.287 [2024-11-26 19:51:20.389829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:82944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.287 [2024-11-26 19:51:20.389835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.287 [2024-11-26 19:51:20.389841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82952 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:17:25.287 [2024-11-26 19:51:20.389845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.287 [2024-11-26 19:51:20.389851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:82960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.287 [2024-11-26 19:51:20.389855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.287 [2024-11-26 19:51:20.389861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:82968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.287 [2024-11-26 19:51:20.389865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.287 [2024-11-26 19:51:20.389871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:82976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.287 [2024-11-26 19:51:20.389875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.287 [2024-11-26 19:51:20.389880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:82984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.287 [2024-11-26 19:51:20.389884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.287 [2024-11-26 19:51:20.389890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.287 [2024-11-26 19:51:20.389894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.287 [2024-11-26 19:51:20.389900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:83000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.287 [2024-11-26 19:51:20.389904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.287 [2024-11-26 19:51:20.389909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:83008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.287 [2024-11-26 19:51:20.389913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.287 [2024-11-26 19:51:20.389919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:83016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.287 [2024-11-26 19:51:20.389923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.287 [2024-11-26 19:51:20.389929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:83024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.287 [2024-11-26 19:51:20.389933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.287 [2024-11-26 19:51:20.389939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:83032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.287 [2024-11-26 
19:51:20.389943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.287 [2024-11-26 19:51:20.389949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:83040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.287 [2024-11-26 19:51:20.389954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.287 [2024-11-26 19:51:20.389959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:83048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.287 [2024-11-26 19:51:20.389963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.287 [2024-11-26 19:51:20.389969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:83056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.287 [2024-11-26 19:51:20.389974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.287 [2024-11-26 19:51:20.389979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:83064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.287 [2024-11-26 19:51:20.389984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.287 [2024-11-26 19:51:20.389989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:83072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.287 [2024-11-26 19:51:20.389993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.287 [2024-11-26 19:51:20.389999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:83080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.287 [2024-11-26 19:51:20.390009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.287 [2024-11-26 19:51:20.390015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:83088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.287 [2024-11-26 19:51:20.390019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.287 [2024-11-26 19:51:20.390024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:83096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.288 [2024-11-26 19:51:20.390028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.288 [2024-11-26 19:51:20.390034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:83104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.288 [2024-11-26 19:51:20.390038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.288 [2024-11-26 19:51:20.390043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:83112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.288 [2024-11-26 19:51:20.390047] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.288 [2024-11-26 19:51:20.390053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:83120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.288 [2024-11-26 19:51:20.390057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.288 [2024-11-26 19:51:20.390063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:83128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.288 [2024-11-26 19:51:20.390067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.288 [2024-11-26 19:51:20.390074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:83136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.288 [2024-11-26 19:51:20.390078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.288 [2024-11-26 19:51:20.390084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:83144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.288 [2024-11-26 19:51:20.390088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.288 [2024-11-26 19:51:20.390094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:83152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.288 [2024-11-26 19:51:20.390098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.288 [2024-11-26 19:51:20.390103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:83160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.288 [2024-11-26 19:51:20.390107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.288 [2024-11-26 19:51:20.390113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:83168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.288 [2024-11-26 19:51:20.390117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.288 [2024-11-26 19:51:20.390122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:83176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.288 [2024-11-26 19:51:20.390126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.288 [2024-11-26 19:51:20.390133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:83184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.288 [2024-11-26 19:51:20.390137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.288 [2024-11-26 19:51:20.390142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:83192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.288 [2024-11-26 19:51:20.390146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.288 [2024-11-26 19:51:20.390152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:83200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.288 [2024-11-26 19:51:20.390156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.288 [2024-11-26 19:51:20.390162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:83208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.288 [2024-11-26 19:51:20.390167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.288 [2024-11-26 19:51:20.390173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:83216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.288 [2024-11-26 19:51:20.390177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.288 [2024-11-26 19:51:20.390183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:83224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.288 [2024-11-26 19:51:20.390186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.288 [2024-11-26 19:51:20.390192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:83232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.288 [2024-11-26 19:51:20.390196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.288 [2024-11-26 19:51:20.390202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:83240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.288 [2024-11-26 19:51:20.390206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.288 [2024-11-26 19:51:20.390212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:83248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.288 [2024-11-26 19:51:20.390216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.288 [2024-11-26 19:51:20.390221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:83256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.288 [2024-11-26 19:51:20.390226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.288 [2024-11-26 19:51:20.390232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:83264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.288 [2024-11-26 19:51:20.390235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.288 [2024-11-26 19:51:20.390242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:83272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.288 [2024-11-26 19:51:20.390246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:17:25.288 [2024-11-26 19:51:20.390252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:83280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.288 [2024-11-26 19:51:20.390256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.288 [2024-11-26 19:51:20.390261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:83288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.288 [2024-11-26 19:51:20.390266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.288 [2024-11-26 19:51:20.390271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:83296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.288 [2024-11-26 19:51:20.390275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.288 [2024-11-26 19:51:20.390281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:83304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.288 [2024-11-26 19:51:20.390285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.288 [2024-11-26 19:51:20.390294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:83312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.288 [2024-11-26 19:51:20.390298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.288 [2024-11-26 19:51:20.390304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:83320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.288 [2024-11-26 19:51:20.390309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.288 [2024-11-26 19:51:20.390314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:83328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.288 [2024-11-26 19:51:20.390319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.288 [2024-11-26 19:51:20.390324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:83336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.288 [2024-11-26 19:51:20.390329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.288 [2024-11-26 19:51:20.390335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:83344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.288 [2024-11-26 19:51:20.390339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.288 [2024-11-26 19:51:20.390345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:83352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.288 [2024-11-26 19:51:20.390349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.288 [2024-11-26 
19:51:20.390355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:83360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.288 [2024-11-26 19:51:20.390359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.288 [2024-11-26 19:51:20.390364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.288 [2024-11-26 19:51:20.390369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.288 [2024-11-26 19:51:20.390374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:83376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.288 [2024-11-26 19:51:20.390379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.288 [2024-11-26 19:51:20.390384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:83384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.288 [2024-11-26 19:51:20.390388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.288 [2024-11-26 19:51:20.390394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:83392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.288 [2024-11-26 19:51:20.390398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.288 [2024-11-26 19:51:20.390403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:83400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.288 [2024-11-26 19:51:20.390408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.289 [2024-11-26 19:51:20.390413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:83408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.289 [2024-11-26 19:51:20.390417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.289 [2024-11-26 19:51:20.390423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:83416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.289 [2024-11-26 19:51:20.390426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.289 [2024-11-26 19:51:20.390433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:83424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.289 [2024-11-26 19:51:20.390437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.289 [2024-11-26 19:51:20.390443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:83432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.289 [2024-11-26 19:51:20.390447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.289 [2024-11-26 19:51:20.390453] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:83440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.289 [2024-11-26 19:51:20.390458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.289 [2024-11-26 19:51:20.390463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:83448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.289 [2024-11-26 19:51:20.390467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.289 [2024-11-26 19:51:20.390473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:83456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.289 [2024-11-26 19:51:20.390477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.289 [2024-11-26 19:51:20.390483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.289 [2024-11-26 19:51:20.390488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.289 [2024-11-26 19:51:20.390493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.289 [2024-11-26 19:51:20.390498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.289 [2024-11-26 19:51:20.390504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:82480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.289 [2024-11-26 19:51:20.390508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.289 [2024-11-26 19:51:20.390513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:82488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.289 [2024-11-26 19:51:20.390517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.289 [2024-11-26 19:51:20.390523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:82496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.289 [2024-11-26 19:51:20.390527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.289 [2024-11-26 19:51:20.390533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:82504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.289 [2024-11-26 19:51:20.390536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.289 [2024-11-26 19:51:20.390543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:82512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.289 [2024-11-26 19:51:20.390547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.289 [2024-11-26 19:51:20.390553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:86 nsid:1 lba:82520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.289 [2024-11-26 19:51:20.390557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.289 [2024-11-26 19:51:20.390563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:82528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.289 [2024-11-26 19:51:20.390567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.289 [2024-11-26 19:51:20.390573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:82536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.289 [2024-11-26 19:51:20.390577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.289 [2024-11-26 19:51:20.390582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:82544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.289 [2024-11-26 19:51:20.390586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.289 [2024-11-26 19:51:20.390592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:82552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.289 [2024-11-26 19:51:20.390596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.289 [2024-11-26 19:51:20.390602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:82560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.289 [2024-11-26 19:51:20.390606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.289 [2024-11-26 19:51:20.390613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.289 [2024-11-26 19:51:20.390617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.289 [2024-11-26 19:51:20.390623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:82576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.289 [2024-11-26 19:51:20.390627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.289 [2024-11-26 19:51:20.390633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:82584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.289 [2024-11-26 19:51:20.390636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.289 [2024-11-26 19:51:20.390642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:82592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.289 [2024-11-26 19:51:20.390648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.289 [2024-11-26 19:51:20.390653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:83480 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:17:25.289 [2024-11-26 19:51:20.390658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.289 [2024-11-26 19:51:20.390663] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166aa50 is same with the state(6) to be set 00:17:25.289 [2024-11-26 19:51:20.390668] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:25.289 [2024-11-26 19:51:20.390672] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:25.289 [2024-11-26 19:51:20.390676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83488 len:8 PRP1 0x0 PRP2 0x0 00:17:25.289 [2024-11-26 19:51:20.390680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:25.289 [2024-11-26 19:51:20.390894] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:17:25.289 [2024-11-26 19:51:20.390907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x160ae50 (9): Bad file descriptor 00:17:25.289 [2024-11-26 19:51:20.390963] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:25.289 [2024-11-26 19:51:20.390973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160ae50 with addr=10.0.0.3, port=4420 00:17:25.289 [2024-11-26 19:51:20.390978] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160ae50 is same with the state(6) to be set 00:17:25.289 [2024-11-26 19:51:20.390986] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x160ae50 (9): Bad file descriptor 00:17:25.289 [2024-11-26 19:51:20.390994] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:17:25.289 [2024-11-26 19:51:20.390998] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:17:25.289 [2024-11-26 19:51:20.391003] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:17:25.289 [2024-11-26 19:51:20.391008] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
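The burst of "ABORTED - SQ DELETION (00/08)" notices above is the queued verify I/O being failed back when the host tears down its submission queue for the controller reset, and the entries that follow show each reconnect to 10.0.0.3 port 4420 being refused (connect() errno 111) while the target listener is gone. While that retry loop runs, host/timeout.sh checks that bdev_nvme still reports the controller and its namespace bdev. A minimal sketch of that check, mirroring the get_controller/get_bdev helpers run just below; the rpc.py path, socket path, and the NVMe0/NVMe0n1 names are taken from this log:

  #!/usr/bin/env bash
  # Confirm bdev_nvme still tracks the controller and bdev while reconnects fail.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock

  ctrlr=$("$rpc" -s "$sock" bdev_nvme_get_controllers | jq -r '.[].name')
  bdev=$("$rpc" -s "$sock" bdev_get_bdevs | jq -r '.[].name')

  # Both names must still be reported even though every queued I/O was aborted
  # and each connect() attempt returns ECONNREFUSED (errno 111).
  [[ "$ctrlr" == "NVMe0" ]] || echo "controller missing: '$ctrlr'"
  [[ "$bdev" == "NVMe0n1" ]] || echo "bdev missing: '$bdev'"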
00:17:25.289 [2024-11-26 19:51:20.391013] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:17:25.289 19:51:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:17:27.155 5154.50 IOPS, 20.13 MiB/s [2024-11-26T19:51:22.402Z] 3436.33 IOPS, 13.42 MiB/s [2024-11-26T19:51:22.402Z] [2024-11-26 19:51:22.391108] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:27.155 [2024-11-26 19:51:22.391149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160ae50 with addr=10.0.0.3, port=4420 00:17:27.155 [2024-11-26 19:51:22.391157] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160ae50 is same with the state(6) to be set 00:17:27.155 [2024-11-26 19:51:22.391330] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x160ae50 (9): Bad file descriptor 00:17:27.155 [2024-11-26 19:51:22.391341] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:17:27.155 [2024-11-26 19:51:22.391346] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:17:27.155 [2024-11-26 19:51:22.391352] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:17:27.155 [2024-11-26 19:51:22.391358] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:17:27.155 [2024-11-26 19:51:22.391363] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:17:27.413 19:51:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:17:27.413 19:51:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:27.413 19:51:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:17:27.413 19:51:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:17:27.413 19:51:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:17:27.413 19:51:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:17:27.413 19:51:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:17:27.765 19:51:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:17:27.765 19:51:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:17:29.269 2577.25 IOPS, 10.07 MiB/s [2024-11-26T19:51:24.516Z] 2061.80 IOPS, 8.05 MiB/s [2024-11-26T19:51:24.516Z] [2024-11-26 19:51:24.391510] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:29.269 [2024-11-26 19:51:24.391568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160ae50 with addr=10.0.0.3, port=4420 00:17:29.269 [2024-11-26 19:51:24.391581] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160ae50 is same with the state(6) to be set 00:17:29.269 [2024-11-26 19:51:24.391800] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x160ae50 (9): Bad file descriptor 00:17:29.269 [2024-11-26 19:51:24.391817] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: 
*ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:17:29.269 [2024-11-26 19:51:24.391824] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:17:29.269 [2024-11-26 19:51:24.391832] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:17:29.269 [2024-11-26 19:51:24.391841] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:17:29.269 [2024-11-26 19:51:24.391850] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:17:31.133 1718.17 IOPS, 6.71 MiB/s [2024-11-26T19:51:26.639Z] 1472.71 IOPS, 5.75 MiB/s [2024-11-26T19:51:26.639Z] [2024-11-26 19:51:26.391898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:17:31.392 [2024-11-26 19:51:26.391947] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:17:31.392 [2024-11-26 19:51:26.391953] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:17:31.392 [2024-11-26 19:51:26.391959] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:17:31.392 [2024-11-26 19:51:26.391966] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:17:31.392 00:17:31.392 Latency(us) 00:17:31.392 [2024-11-26T19:51:26.639Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:31.392 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:31.392 Verification LBA range: start 0x0 length 0x4000 00:17:31.392 NVMe0n1 : 7.11 1449.21 5.66 53.98 0.00 85058.14 264.66 6013986.66 00:17:31.392 [2024-11-26T19:51:26.639Z] =================================================================================================================== 00:17:31.392 [2024-11-26T19:51:26.639Z] Total : 1449.21 5.66 53.98 0.00 85058.14 264.66 6013986.66 00:17:31.392 { 00:17:31.392 "results": [ 00:17:31.392 { 00:17:31.392 "job": "NVMe0n1", 00:17:31.392 "core_mask": "0x4", 00:17:31.392 "workload": "verify", 00:17:31.392 "status": "finished", 00:17:31.392 "verify_range": { 00:17:31.392 "start": 0, 00:17:31.392 "length": 16384 00:17:31.392 }, 00:17:31.392 "queue_depth": 128, 00:17:31.392 "io_size": 4096, 00:17:31.392 "runtime": 7.113524, 00:17:31.392 "iops": 1449.2113894604138, 00:17:31.392 "mibps": 5.660981990079741, 00:17:31.392 "io_failed": 384, 00:17:31.392 "io_timeout": 0, 00:17:31.392 "avg_latency_us": 85058.14162363588, 00:17:31.392 "min_latency_us": 264.6646153846154, 00:17:31.392 "max_latency_us": 6013986.658461538 00:17:31.392 } 00:17:31.392 ], 00:17:31.392 "core_count": 1 00:17:31.392 } 00:17:32.865 19:51:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:17:32.865 19:51:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:32.865 19:51:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:17:32.865 19:51:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:17:32.865 19:51:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:17:32.865 19:51:27 nvmf_tcp.nvmf_host.nvmf_timeout -- 
host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:17:32.865 19:51:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:17:33.123 19:51:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:17:33.123 19:51:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 80597 00:17:33.123 19:51:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 80573 00:17:33.123 19:51:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 80573 ']' 00:17:33.123 19:51:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 80573 00:17:33.123 19:51:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:17:33.123 19:51:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:33.123 19:51:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80573 00:17:33.123 19:51:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:33.123 19:51:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:33.123 19:51:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80573' 00:17:33.123 killing process with pid 80573 00:17:33.123 Received shutdown signal, test time was about 8.933960 seconds 00:17:33.123 00:17:33.123 Latency(us) 00:17:33.123 [2024-11-26T19:51:28.370Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:33.123 [2024-11-26T19:51:28.370Z] =================================================================================================================== 00:17:33.123 [2024-11-26T19:51:28.370Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:33.123 19:51:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 80573 00:17:33.123 19:51:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 80573 00:17:33.123 19:51:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:33.381 [2024-11-26 19:51:28.493152] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:33.381 19:51:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=80719 00:17:33.381 19:51:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 80719 /var/tmp/bdevperf.sock 00:17:33.381 19:51:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 80719 ']' 00:17:33.381 19:51:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:33.381 19:51:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:17:33.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:33.381 19:51:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:33.381 19:51:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
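The JSON block printed at the end of the previous bdevperf run is the machine-readable counterpart of the latency table above it; the fields that matter for a timeout run are the completed IOPS, the number of failed I/Os, and the worst-case latency. A sketch of pulling those out with jq, assuming the block has been saved to results.json (a hypothetical file name; the key names match the output in this log):

  # results.json is hypothetical; it holds the {"results": [...], "core_count": N}
  # block exactly as bdevperf printed it above.
  jq -r '.results[] | "\(.job): iops=\(.iops) io_failed=\(.io_failed) avg_us=\(.avg_latency_us) max_us=\(.max_latency_us)"' results.json
  jq -r '.core_count' results.json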
00:17:33.381 19:51:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:33.381 19:51:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:33.381 [2024-11-26 19:51:28.536434] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:17:33.381 [2024-11-26 19:51:28.536498] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80719 ] 00:17:33.640 [2024-11-26 19:51:28.664557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:33.640 [2024-11-26 19:51:28.703276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:33.640 [2024-11-26 19:51:28.741744] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:34.205 19:51:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:34.205 19:51:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:17:34.205 19:51:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:17:34.463 19:51:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:17:34.722 NVMe0n1 00:17:34.722 19:51:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=80737 00:17:34.722 19:51:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:17:34.722 19:51:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:34.979 Running I/O for 10 seconds... 
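The second bdevperf instance (pid 80719) attaches the controller with explicit reconnect behaviour: retry the connection every --reconnect-delay-sec 1 second, start failing I/O up the stack once --fast-io-fail-timeout-sec 2 seconds pass without a connection, and give up on the controller entirely after --ctrlr-loss-timeout-sec 5 seconds. The listener removal that follows in the log is what exercises that path. A minimal sketch of the same wiring, using the RPC calls and values shown here (bdev_nvme_set_options -r -1 is kept as issued by the harness):

  #!/usr/bin/env bash
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock

  # Options and attach, as issued by host/timeout.sh@78 and @79 above.
  "$rpc" -s "$sock" bdev_nvme_set_options -r -1
  "$rpc" -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1

  # Dropping the target-side listener (as the harness does next) forces the
  # reconnect / fast-io-fail / ctrlr-loss sequence recorded in the rest of this log.
  "$rpc" nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420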
00:17:35.914 19:51:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:35.914 12423.00 IOPS, 48.53 MiB/s [2024-11-26T19:51:31.161Z] [2024-11-26 19:51:31.093019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:111608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.914 [2024-11-26 19:51:31.093071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.914 [2024-11-26 19:51:31.093087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:111616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.914 [2024-11-26 19:51:31.093092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.914 [2024-11-26 19:51:31.093099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:111624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.914 [2024-11-26 19:51:31.093105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.914 [2024-11-26 19:51:31.093111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:111632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.914 [2024-11-26 19:51:31.093116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.914 [2024-11-26 19:51:31.093122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:111640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.914 [2024-11-26 19:51:31.093127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.914 [2024-11-26 19:51:31.093133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:111648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.914 [2024-11-26 19:51:31.093137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.914 [2024-11-26 19:51:31.093143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:111656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.914 [2024-11-26 19:51:31.093148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.914 [2024-11-26 19:51:31.093154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:111664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.914 [2024-11-26 19:51:31.093158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.914 [2024-11-26 19:51:31.093164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:111160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.914 [2024-11-26 19:51:31.093168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.914 [2024-11-26 19:51:31.093174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 
lba:111168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.914 [2024-11-26 19:51:31.093179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.914 [2024-11-26 19:51:31.093185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:111176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.914 [2024-11-26 19:51:31.093189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.914 [2024-11-26 19:51:31.093195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:111184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.914 [2024-11-26 19:51:31.093199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.914 [2024-11-26 19:51:31.093205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:111192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.915 [2024-11-26 19:51:31.093209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.915 [2024-11-26 19:51:31.093215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:111200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.915 [2024-11-26 19:51:31.093219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.915 [2024-11-26 19:51:31.093225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:111208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.915 [2024-11-26 19:51:31.093230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.915 [2024-11-26 19:51:31.093236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:111216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.915 [2024-11-26 19:51:31.093240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.915 [2024-11-26 19:51:31.093246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:111672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.915 [2024-11-26 19:51:31.093254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.915 [2024-11-26 19:51:31.093260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:111680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.915 [2024-11-26 19:51:31.093265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.915 [2024-11-26 19:51:31.093271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:111688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.915 [2024-11-26 19:51:31.093276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.915 [2024-11-26 19:51:31.093281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:111696 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:17:35.915 [2024-11-26 19:51:31.093286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.915 [2024-11-26 19:51:31.093292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:111704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.915 [2024-11-26 19:51:31.093297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.915 [2024-11-26 19:51:31.093303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:111712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.915 [2024-11-26 19:51:31.093307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.915 [2024-11-26 19:51:31.093312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:111720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.915 [2024-11-26 19:51:31.093317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.915 [2024-11-26 19:51:31.093322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:111728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.915 [2024-11-26 19:51:31.093328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.915 [2024-11-26 19:51:31.093333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:111736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.915 [2024-11-26 19:51:31.093338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.915 [2024-11-26 19:51:31.093344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:111744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.915 [2024-11-26 19:51:31.093349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.915 [2024-11-26 19:51:31.093355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:111752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.915 [2024-11-26 19:51:31.093359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.915 [2024-11-26 19:51:31.093365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:111760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.915 [2024-11-26 19:51:31.093369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.915 [2024-11-26 19:51:31.093376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:111768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.915 [2024-11-26 19:51:31.093381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.915 [2024-11-26 19:51:31.093386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:111776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.915 
[2024-11-26 19:51:31.093391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.915 [2024-11-26 19:51:31.093396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:111784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.915 [2024-11-26 19:51:31.093401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.915 [2024-11-26 19:51:31.093408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:111792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.915 [2024-11-26 19:51:31.093412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.915 [2024-11-26 19:51:31.093419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:111224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.915 [2024-11-26 19:51:31.093426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.915 [2024-11-26 19:51:31.093432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:111232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.915 [2024-11-26 19:51:31.093437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.915 [2024-11-26 19:51:31.093444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:111240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.915 [2024-11-26 19:51:31.093448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.915 [2024-11-26 19:51:31.093454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:111248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.915 [2024-11-26 19:51:31.093458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.915 [2024-11-26 19:51:31.093465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:111256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.915 [2024-11-26 19:51:31.093470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.915 [2024-11-26 19:51:31.093476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:111264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.915 [2024-11-26 19:51:31.093482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.915 [2024-11-26 19:51:31.093488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:111272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.915 [2024-11-26 19:51:31.093492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.915 [2024-11-26 19:51:31.093499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:111280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.915 [2024-11-26 19:51:31.093504] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.915 [2024-11-26 19:51:31.093510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:111800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.915 [2024-11-26 19:51:31.093515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.915 [2024-11-26 19:51:31.093521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:111808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.915 [2024-11-26 19:51:31.093526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.916 [2024-11-26 19:51:31.093534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:111816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.916 [2024-11-26 19:51:31.093539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.916 [2024-11-26 19:51:31.093545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:111824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.916 [2024-11-26 19:51:31.093550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.916 [2024-11-26 19:51:31.093557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:111832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.916 [2024-11-26 19:51:31.093562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.916 [2024-11-26 19:51:31.093568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:111840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.916 [2024-11-26 19:51:31.093573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.916 [2024-11-26 19:51:31.093579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:111848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.916 [2024-11-26 19:51:31.093584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.916 [2024-11-26 19:51:31.093590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:111856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.916 [2024-11-26 19:51:31.093595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.916 [2024-11-26 19:51:31.093601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:111864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.916 [2024-11-26 19:51:31.093607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.916 [2024-11-26 19:51:31.093613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:111872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.916 [2024-11-26 19:51:31.093618] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.916 [2024-11-26 19:51:31.093624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:111880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.916 [2024-11-26 19:51:31.093628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.916 [2024-11-26 19:51:31.093634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:111888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.916 [2024-11-26 19:51:31.093638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.916 [2024-11-26 19:51:31.093644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:111896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.916 [2024-11-26 19:51:31.093649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.916 [2024-11-26 19:51:31.093655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:111904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.916 [2024-11-26 19:51:31.093659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.916 [2024-11-26 19:51:31.093664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:111912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.916 [2024-11-26 19:51:31.093669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.916 [2024-11-26 19:51:31.093674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:111920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.916 [2024-11-26 19:51:31.093679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.916 [2024-11-26 19:51:31.093685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:111928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.916 [2024-11-26 19:51:31.093689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.916 [2024-11-26 19:51:31.093695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:111936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.916 [2024-11-26 19:51:31.093710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.916 [2024-11-26 19:51:31.093716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:111944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.916 [2024-11-26 19:51:31.093720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.916 [2024-11-26 19:51:31.093725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:111952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.916 [2024-11-26 19:51:31.093729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.916 [2024-11-26 19:51:31.093735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:111288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.916 [2024-11-26 19:51:31.093739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.916 [2024-11-26 19:51:31.093745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:111296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.916 [2024-11-26 19:51:31.093750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.916 [2024-11-26 19:51:31.093756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:111304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.916 [2024-11-26 19:51:31.093760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.916 [2024-11-26 19:51:31.093777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:111312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.916 [2024-11-26 19:51:31.093781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.916 [2024-11-26 19:51:31.093789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:111320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.916 [2024-11-26 19:51:31.093794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.916 [2024-11-26 19:51:31.093801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:111328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.916 [2024-11-26 19:51:31.093806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.916 [2024-11-26 19:51:31.093812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:111336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.916 [2024-11-26 19:51:31.093817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.916 [2024-11-26 19:51:31.093823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:111344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.916 [2024-11-26 19:51:31.093827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.916 [2024-11-26 19:51:31.093833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:111352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.916 [2024-11-26 19:51:31.093838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.916 [2024-11-26 19:51:31.093845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:111360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.916 [2024-11-26 19:51:31.093849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.916 [2024-11-26 19:51:31.093855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:111368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.916 [2024-11-26 19:51:31.093860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.916 [2024-11-26 19:51:31.093866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:111376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.916 [2024-11-26 19:51:31.093870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.917 [2024-11-26 19:51:31.093877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:111384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.917 [2024-11-26 19:51:31.093882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.917 [2024-11-26 19:51:31.093888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:111392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.917 [2024-11-26 19:51:31.093893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.917 [2024-11-26 19:51:31.093899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:111400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.917 [2024-11-26 19:51:31.093904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.917 [2024-11-26 19:51:31.093911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:111408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.917 [2024-11-26 19:51:31.093916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.917 [2024-11-26 19:51:31.093922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:111960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.917 [2024-11-26 19:51:31.093927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.917 [2024-11-26 19:51:31.093934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:111968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.917 [2024-11-26 19:51:31.093938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.917 [2024-11-26 19:51:31.093946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:111976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.917 [2024-11-26 19:51:31.093951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.917 [2024-11-26 19:51:31.093957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:111984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.917 [2024-11-26 19:51:31.093961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:17:35.917 [2024-11-26 19:51:31.093967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:111992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.917 [2024-11-26 19:51:31.093971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.917 [2024-11-26 19:51:31.093978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:112000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.917 [2024-11-26 19:51:31.093991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.917 [2024-11-26 19:51:31.093998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:112008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.917 [2024-11-26 19:51:31.094002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.917 [2024-11-26 19:51:31.094009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:112016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.917 [2024-11-26 19:51:31.094013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.917 [2024-11-26 19:51:31.094019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:112024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.917 [2024-11-26 19:51:31.094024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.917 [2024-11-26 19:51:31.094030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:112032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.917 [2024-11-26 19:51:31.094035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.917 [2024-11-26 19:51:31.094041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:112040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.917 [2024-11-26 19:51:31.094046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.917 [2024-11-26 19:51:31.094052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:112048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.917 [2024-11-26 19:51:31.094056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.917 [2024-11-26 19:51:31.094062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:112056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.917 [2024-11-26 19:51:31.094067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.917 [2024-11-26 19:51:31.094073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:112064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.917 [2024-11-26 19:51:31.094077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.917 [2024-11-26 
19:51:31.094084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:111416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.917 [2024-11-26 19:51:31.094089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.917 [2024-11-26 19:51:31.094095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:111424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.917 [2024-11-26 19:51:31.094099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.917 [2024-11-26 19:51:31.094105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:111432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.917 [2024-11-26 19:51:31.094110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.917 [2024-11-26 19:51:31.094116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:111440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.917 [2024-11-26 19:51:31.094120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.917 [2024-11-26 19:51:31.094127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:111448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.917 [2024-11-26 19:51:31.094132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.917 [2024-11-26 19:51:31.094138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:111456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.917 [2024-11-26 19:51:31.094143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.917 [2024-11-26 19:51:31.094149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:111464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.917 [2024-11-26 19:51:31.094153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.917 [2024-11-26 19:51:31.094159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:111472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.917 [2024-11-26 19:51:31.094167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.917 [2024-11-26 19:51:31.094173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:112072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.917 [2024-11-26 19:51:31.094178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.917 [2024-11-26 19:51:31.094184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:112080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.917 [2024-11-26 19:51:31.094188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.917 [2024-11-26 19:51:31.094195] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:112088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.917 [2024-11-26 19:51:31.094200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.917 [2024-11-26 19:51:31.094206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:112096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.918 [2024-11-26 19:51:31.094210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.918 [2024-11-26 19:51:31.094216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:112104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.918 [2024-11-26 19:51:31.094220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.918 [2024-11-26 19:51:31.094228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:112112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.918 [2024-11-26 19:51:31.094233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.918 [2024-11-26 19:51:31.094240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:112120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.918 [2024-11-26 19:51:31.094245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.918 [2024-11-26 19:51:31.094251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:112128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.918 [2024-11-26 19:51:31.094255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.918 [2024-11-26 19:51:31.094261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:112136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.918 [2024-11-26 19:51:31.094265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.918 [2024-11-26 19:51:31.094271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:112144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.918 [2024-11-26 19:51:31.094275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.918 [2024-11-26 19:51:31.094281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:112152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.918 [2024-11-26 19:51:31.094285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.918 [2024-11-26 19:51:31.094290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:112160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.918 [2024-11-26 19:51:31.094296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.918 [2024-11-26 19:51:31.094303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:30 nsid:1 lba:112168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.918 [2024-11-26 19:51:31.094307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.918 [2024-11-26 19:51:31.094314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:112176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.918 [2024-11-26 19:51:31.094318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.918 [2024-11-26 19:51:31.094324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:111480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.918 [2024-11-26 19:51:31.094329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.918 [2024-11-26 19:51:31.094336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:111488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.918 [2024-11-26 19:51:31.094344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.918 [2024-11-26 19:51:31.094350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:111496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.918 [2024-11-26 19:51:31.094355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.918 [2024-11-26 19:51:31.094361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:111504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.918 [2024-11-26 19:51:31.094365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.918 [2024-11-26 19:51:31.094371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:111512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.918 [2024-11-26 19:51:31.094376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.918 [2024-11-26 19:51:31.094382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:111520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.918 [2024-11-26 19:51:31.094387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.918 [2024-11-26 19:51:31.094393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:111528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.918 [2024-11-26 19:51:31.094397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.918 [2024-11-26 19:51:31.094404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:111536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.918 [2024-11-26 19:51:31.094408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.918 [2024-11-26 19:51:31.094415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 
lba:111544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.918 [2024-11-26 19:51:31.094419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.918 [2024-11-26 19:51:31.094426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:111552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.918 [2024-11-26 19:51:31.094430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.918 [2024-11-26 19:51:31.094437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:111560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.918 [2024-11-26 19:51:31.094441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.918 [2024-11-26 19:51:31.094447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:111568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.918 [2024-11-26 19:51:31.094452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.918 [2024-11-26 19:51:31.094458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:111576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.918 [2024-11-26 19:51:31.094462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.918 [2024-11-26 19:51:31.094468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:111584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.918 [2024-11-26 19:51:31.094473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.918 [2024-11-26 19:51:31.094480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:111592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.918 [2024-11-26 19:51:31.094485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.918 [2024-11-26 19:51:31.094491] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213ea50 is same with the state(6) to be set 00:17:35.918 [2024-11-26 19:51:31.094498] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:35.918 [2024-11-26 19:51:31.094502] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:35.918 [2024-11-26 19:51:31.094506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:111600 len:8 PRP1 0x0 PRP2 0x0 00:17:35.918 [2024-11-26 19:51:31.094510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.918 [2024-11-26 19:51:31.094760] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:17:35.918 [2024-11-26 19:51:31.094830] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20dee50 (9): Bad file descriptor 00:17:35.918 [2024-11-26 19:51:31.094899] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:35.918 [2024-11-26 19:51:31.094913] 
nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20dee50 with addr=10.0.0.3, port=4420 00:17:35.918 [2024-11-26 19:51:31.094919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20dee50 is same with the state(6) to be set 00:17:35.918 [2024-11-26 19:51:31.094928] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20dee50 (9): Bad file descriptor 00:17:35.918 [2024-11-26 19:51:31.094937] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:17:35.918 [2024-11-26 19:51:31.094943] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:17:35.918 [2024-11-26 19:51:31.094949] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:17:35.918 [2024-11-26 19:51:31.094956] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:17:35.919 [2024-11-26 19:51:31.094961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:17:35.919 19:51:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:17:36.894 6947.50 IOPS, 27.14 MiB/s [2024-11-26T19:51:32.141Z] [2024-11-26 19:51:32.095072] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:36.894 [2024-11-26 19:51:32.095122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20dee50 with addr=10.0.0.3, port=4420 00:17:36.894 [2024-11-26 19:51:32.095130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20dee50 is same with the state(6) to be set 00:17:36.894 [2024-11-26 19:51:32.095144] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20dee50 (9): Bad file descriptor 00:17:36.894 [2024-11-26 19:51:32.095155] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:17:36.894 [2024-11-26 19:51:32.095159] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:17:36.894 [2024-11-26 19:51:32.095168] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:17:36.894 [2024-11-26 19:51:32.095175] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:17:36.894 [2024-11-26 19:51:32.095181] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:17:37.178 19:51:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:37.178 [2024-11-26 19:51:32.308103] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:37.178 19:51:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 80737 00:17:37.999 4631.67 IOPS, 18.09 MiB/s [2024-11-26T19:51:33.246Z] [2024-11-26 19:51:33.109528] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
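The reconnect churn above appears to be driven by the timeout test itself: while the subsystem's TCP listener is down, every reset attempt from the host fails with connect() errno 111, and recovery only happens once host/timeout.sh re-adds the listener with the rpc.py call logged just above. A minimal sketch of that listener toggle, using only rpc.py subcommands and parameters that appear in this log (paths and addresses are from this run; other setups will differ):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Drop the listener so host reconnect attempts start failing (connect() errno = 111).
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  sleep 1
  # Restore the listener; the next controller reset should then complete successfully.
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420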
00:17:39.866 3473.75 IOPS, 13.57 MiB/s [2024-11-26T19:51:36.046Z] 5099.20 IOPS, 19.92 MiB/s [2024-11-26T19:51:37.422Z] 6444.00 IOPS, 25.17 MiB/s [2024-11-26T19:51:37.988Z] 7400.00 IOPS, 28.91 MiB/s [2024-11-26T19:51:39.360Z] 8117.50 IOPS, 31.71 MiB/s [2024-11-26T19:51:40.294Z] 8672.89 IOPS, 33.88 MiB/s [2024-11-26T19:51:40.294Z] 9111.20 IOPS, 35.59 MiB/s
00:17:45.047 Latency(us)
00:17:45.047 [2024-11-26T19:51:40.294Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:45.047 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:17:45.047 Verification LBA range: start 0x0 length 0x4000
00:17:45.047 NVMe0n1 : 10.01 9115.80 35.61 0.00 0.00 14025.74 957.83 3019898.88
00:17:45.047 [2024-11-26T19:51:40.294Z] ===================================================================================================================
00:17:45.047 [2024-11-26T19:51:40.294Z] Total : 9115.80 35.61 0.00 0.00 14025.74 957.83 3019898.88
00:17:45.047 {
00:17:45.047   "results": [
00:17:45.047     {
00:17:45.047       "job": "NVMe0n1",
00:17:45.047       "core_mask": "0x4",
00:17:45.047       "workload": "verify",
00:17:45.047       "status": "finished",
00:17:45.047       "verify_range": {
00:17:45.047         "start": 0,
00:17:45.047         "length": 16384
00:17:45.047       },
00:17:45.047       "queue_depth": 128,
00:17:45.047       "io_size": 4096,
00:17:45.047       "runtime": 10.007235,
00:17:45.047       "iops": 9115.804715288488,
00:17:45.047       "mibps": 35.60861216909566,
00:17:45.047       "io_failed": 0,
00:17:45.047       "io_timeout": 0,
00:17:45.047       "avg_latency_us": 14025.735577816902,
00:17:45.047       "min_latency_us": 957.8338461538461,
00:17:45.047       "max_latency_us": 3019898.88
00:17:45.047     }
00:17:45.047   ],
00:17:45.047   "core_count": 1
00:17:45.047 }
00:17:45.047 19:51:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=80847
00:17:45.047 19:51:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:17:45.047 19:51:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:17:45.047 Running I/O for 10 seconds...
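The JSON object above is bdevperf's summary for the completed run (per-job IOPS, MiB/s, failed and timed-out I/O, and latency in microseconds). If that summary is captured to a file, the headline numbers can be pulled out with jq; the file name below is only illustrative, not something this test produces:

  # bdevperf_result.json is a hypothetical capture of the JSON summary printed above.
  jq '.results[0] | {job, iops, mibps, io_failed, avg_latency_us}' bdevperf_result.json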
00:17:45.982 19:51:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:45.982 12436.00 IOPS, 48.58 MiB/s [2024-11-26T19:51:41.229Z] [2024-11-26 19:51:41.198908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:45.982 [2024-11-26 19:51:41.198959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.982 [2024-11-26 19:51:41.198968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:45.982 [2024-11-26 19:51:41.198974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.982 [2024-11-26 19:51:41.198981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:45.982 [2024-11-26 19:51:41.198987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.982 [2024-11-26 19:51:41.198994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:45.982 [2024-11-26 19:51:41.198999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.982 [2024-11-26 19:51:41.199005] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20dee50 is same with the state(6) to be set 00:17:45.982 [2024-11-26 19:51:41.199050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:110872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.982 [2024-11-26 19:51:41.199058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.982 [2024-11-26 19:51:41.199070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:110880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.982 [2024-11-26 19:51:41.199075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.982 [2024-11-26 19:51:41.199083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:110888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.982 [2024-11-26 19:51:41.199088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.982 [2024-11-26 19:51:41.199095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:110896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.982 [2024-11-26 19:51:41.199101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.982 [2024-11-26 19:51:41.199122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:110904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.982 [2024-11-26 19:51:41.199128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:17:45.982 [2024-11-26 19:51:41.199135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:110912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.982 [2024-11-26 19:51:41.199140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.982 [2024-11-26 19:51:41.199148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:110232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.982 [2024-11-26 19:51:41.199153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.982 [2024-11-26 19:51:41.199160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:110240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.982 [2024-11-26 19:51:41.199166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.982 [2024-11-26 19:51:41.199173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:110248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.982 [2024-11-26 19:51:41.199178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.982 [2024-11-26 19:51:41.199185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:110256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.982 [2024-11-26 19:51:41.199191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.983 [2024-11-26 19:51:41.199198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:110264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.983 [2024-11-26 19:51:41.199203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.983 [2024-11-26 19:51:41.199211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:110272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.983 [2024-11-26 19:51:41.199216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.983 [2024-11-26 19:51:41.199225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:110280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.983 [2024-11-26 19:51:41.199231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.983 [2024-11-26 19:51:41.199238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:110288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.983 [2024-11-26 19:51:41.199244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.983 [2024-11-26 19:51:41.199251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:110296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.983 [2024-11-26 19:51:41.199257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.983 
[2024-11-26 19:51:41.199264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:110304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.983 [2024-11-26 19:51:41.199269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.983 [2024-11-26 19:51:41.199277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:110312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.983 [2024-11-26 19:51:41.199282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.983 [2024-11-26 19:51:41.199289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:110320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.983 [2024-11-26 19:51:41.199294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.983 [2024-11-26 19:51:41.199302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:110328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.983 [2024-11-26 19:51:41.199307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.983 [2024-11-26 19:51:41.199314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:110336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.983 [2024-11-26 19:51:41.199319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.983 [2024-11-26 19:51:41.199327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:110344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.983 [2024-11-26 19:51:41.199332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.983 [2024-11-26 19:51:41.199339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:110352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.983 [2024-11-26 19:51:41.199344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.983 [2024-11-26 19:51:41.199352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:110920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.983 [2024-11-26 19:51:41.199357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.983 [2024-11-26 19:51:41.199364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:110928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.983 [2024-11-26 19:51:41.199369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.983 [2024-11-26 19:51:41.199377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:110936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.983 [2024-11-26 19:51:41.199382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.983 [2024-11-26 19:51:41.199389] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:110944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.983 [2024-11-26 19:51:41.199394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.983 [2024-11-26 19:51:41.199408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:110952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.983 [2024-11-26 19:51:41.199414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.983 [2024-11-26 19:51:41.199421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:110960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.983 [2024-11-26 19:51:41.199427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.983 [2024-11-26 19:51:41.199434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:110968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.983 [2024-11-26 19:51:41.199440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.983 [2024-11-26 19:51:41.199447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:110976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.983 [2024-11-26 19:51:41.199452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.983 [2024-11-26 19:51:41.199459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:110984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.983 [2024-11-26 19:51:41.199465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.983 [2024-11-26 19:51:41.199472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:110992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.983 [2024-11-26 19:51:41.199477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.983 [2024-11-26 19:51:41.199490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:111000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.983 [2024-11-26 19:51:41.199495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.983 [2024-11-26 19:51:41.199502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:111008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.983 [2024-11-26 19:51:41.199508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.983 [2024-11-26 19:51:41.199515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:111016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.983 [2024-11-26 19:51:41.199521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.983 [2024-11-26 19:51:41.199529] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:111024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.983 [2024-11-26 19:51:41.199534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.983 [2024-11-26 19:51:41.199541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:111032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.983 [2024-11-26 19:51:41.199546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.983 [2024-11-26 19:51:41.199553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:110360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.983 [2024-11-26 19:51:41.199559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.983 [2024-11-26 19:51:41.199566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:110368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.983 [2024-11-26 19:51:41.199571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.983 [2024-11-26 19:51:41.199578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:110376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.983 [2024-11-26 19:51:41.199583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.983 [2024-11-26 19:51:41.199591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:110384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.983 [2024-11-26 19:51:41.199596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.983 [2024-11-26 19:51:41.199603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:110392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.983 [2024-11-26 19:51:41.199608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.983 [2024-11-26 19:51:41.199616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:110400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.983 [2024-11-26 19:51:41.199621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.983 [2024-11-26 19:51:41.199628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:110408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.983 [2024-11-26 19:51:41.199633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.983 [2024-11-26 19:51:41.199641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:110416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.983 [2024-11-26 19:51:41.199646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.983 [2024-11-26 19:51:41.199654] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:66 nsid:1 lba:110424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.983 [2024-11-26 19:51:41.199659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.983 [2024-11-26 19:51:41.199666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:110432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.983 [2024-11-26 19:51:41.199671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.983 [2024-11-26 19:51:41.199678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:110440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.983 [2024-11-26 19:51:41.199684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.983 [2024-11-26 19:51:41.199691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:110448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.983 [2024-11-26 19:51:41.199697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.983 [2024-11-26 19:51:41.199704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:110456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.984 [2024-11-26 19:51:41.199709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.984 [2024-11-26 19:51:41.199716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:110464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.984 [2024-11-26 19:51:41.199722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.984 [2024-11-26 19:51:41.199729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:110472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.984 [2024-11-26 19:51:41.199734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.984 [2024-11-26 19:51:41.199742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:110480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.984 [2024-11-26 19:51:41.199747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.984 [2024-11-26 19:51:41.199754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:111040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.984 [2024-11-26 19:51:41.199759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.984 [2024-11-26 19:51:41.199776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:111048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.984 [2024-11-26 19:51:41.199782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.984 [2024-11-26 19:51:41.199789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 
nsid:1 lba:111056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.984 [2024-11-26 19:51:41.199795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.984 [2024-11-26 19:51:41.199802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:111064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.984 [2024-11-26 19:51:41.199807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.984 [2024-11-26 19:51:41.199814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:111072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.984 [2024-11-26 19:51:41.199819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.984 [2024-11-26 19:51:41.199826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:111080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.984 [2024-11-26 19:51:41.199831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.984 [2024-11-26 19:51:41.199838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:111088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.984 [2024-11-26 19:51:41.199843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.984 [2024-11-26 19:51:41.199851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:111096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.984 [2024-11-26 19:51:41.199857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.984 [2024-11-26 19:51:41.199865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:111104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.984 [2024-11-26 19:51:41.199870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.984 [2024-11-26 19:51:41.199877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:111112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.984 [2024-11-26 19:51:41.199882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.984 [2024-11-26 19:51:41.199890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:111120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.984 [2024-11-26 19:51:41.199896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.984 [2024-11-26 19:51:41.199904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:110488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.984 [2024-11-26 19:51:41.199909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.984 [2024-11-26 19:51:41.199916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:110496 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:17:45.984 [2024-11-26 19:51:41.199922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.984 [2024-11-26 19:51:41.199929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:110504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.984 [2024-11-26 19:51:41.199934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.984 [2024-11-26 19:51:41.199942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.984 [2024-11-26 19:51:41.199948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.984 [2024-11-26 19:51:41.199955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:110520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.984 [2024-11-26 19:51:41.199960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.984 [2024-11-26 19:51:41.199967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:110528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.984 [2024-11-26 19:51:41.199972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.984 [2024-11-26 19:51:41.199980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:110536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.984 [2024-11-26 19:51:41.199985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.984 [2024-11-26 19:51:41.199992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:110544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.984 [2024-11-26 19:51:41.199997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.984 [2024-11-26 19:51:41.200005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:110552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.984 [2024-11-26 19:51:41.200010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.984 [2024-11-26 19:51:41.200017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:110560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.984 [2024-11-26 19:51:41.200023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.984 [2024-11-26 19:51:41.200030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:110568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.984 [2024-11-26 19:51:41.200035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.984 [2024-11-26 19:51:41.200042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:110576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:45.984 [2024-11-26 19:51:41.200047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.984 [2024-11-26 19:51:41.200055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:110584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.984 [2024-11-26 19:51:41.200061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.984 [2024-11-26 19:51:41.200068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:110592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.984 [2024-11-26 19:51:41.200073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.984 [2024-11-26 19:51:41.200080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:110600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.984 [2024-11-26 19:51:41.200085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.984 [2024-11-26 19:51:41.200093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:110608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.984 [2024-11-26 19:51:41.200098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.984 [2024-11-26 19:51:41.200105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:111128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.984 [2024-11-26 19:51:41.200111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.984 [2024-11-26 19:51:41.200119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:111136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.984 [2024-11-26 19:51:41.200124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.984 [2024-11-26 19:51:41.200132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:111144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.984 [2024-11-26 19:51:41.200137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.984 [2024-11-26 19:51:41.200144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:111152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.984 [2024-11-26 19:51:41.200149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.984 [2024-11-26 19:51:41.200156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:111160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.984 [2024-11-26 19:51:41.200162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.984 [2024-11-26 19:51:41.200169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:111168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.984 [2024-11-26 
19:51:41.200174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.984 [2024-11-26 19:51:41.200181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:111176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.984 [2024-11-26 19:51:41.200186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.984 [2024-11-26 19:51:41.200193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:111184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.984 [2024-11-26 19:51:41.200199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.984 [2024-11-26 19:51:41.200206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:110616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.984 [2024-11-26 19:51:41.200212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.984 [2024-11-26 19:51:41.200219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:110624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.985 [2024-11-26 19:51:41.200224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.985 [2024-11-26 19:51:41.200232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:110632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.985 [2024-11-26 19:51:41.200237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.985 [2024-11-26 19:51:41.200246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:110640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.985 [2024-11-26 19:51:41.200251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.985 [2024-11-26 19:51:41.200259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:110648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.985 [2024-11-26 19:51:41.200264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.985 [2024-11-26 19:51:41.200277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:110656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.985 [2024-11-26 19:51:41.200283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.985 [2024-11-26 19:51:41.200290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:110664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.985 [2024-11-26 19:51:41.200295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.985 [2024-11-26 19:51:41.200303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:110672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.985 [2024-11-26 19:51:41.200309] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.985 [2024-11-26 19:51:41.200316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:110680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.985 [2024-11-26 19:51:41.200322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.985 [2024-11-26 19:51:41.200329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:110688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.985 [2024-11-26 19:51:41.200334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.985 [2024-11-26 19:51:41.200341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:110696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.985 [2024-11-26 19:51:41.200347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.985 [2024-11-26 19:51:41.200354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:110704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.985 [2024-11-26 19:51:41.200359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.985 [2024-11-26 19:51:41.200367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:110712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.985 [2024-11-26 19:51:41.200372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.985 [2024-11-26 19:51:41.200379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:110720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.985 [2024-11-26 19:51:41.200384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.985 [2024-11-26 19:51:41.200392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:110728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.985 [2024-11-26 19:51:41.200397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.985 [2024-11-26 19:51:41.200405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:110736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.985 [2024-11-26 19:51:41.200410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.985 [2024-11-26 19:51:41.200417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:110744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.985 [2024-11-26 19:51:41.200422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.985 [2024-11-26 19:51:41.200429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:110752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.985 [2024-11-26 19:51:41.200436] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.985 [2024-11-26 19:51:41.200443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:110760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.985 [2024-11-26 19:51:41.200448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.985 [2024-11-26 19:51:41.200455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:110768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.985 [2024-11-26 19:51:41.200461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.985 [2024-11-26 19:51:41.200468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:110776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.985 [2024-11-26 19:51:41.200474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.985 [2024-11-26 19:51:41.200484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:110784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.985 [2024-11-26 19:51:41.200489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.985 [2024-11-26 19:51:41.200496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:110792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.985 [2024-11-26 19:51:41.200503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.985 [2024-11-26 19:51:41.200510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:110800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.985 [2024-11-26 19:51:41.200516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.985 [2024-11-26 19:51:41.200523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:111192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.985 [2024-11-26 19:51:41.200528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.985 [2024-11-26 19:51:41.200536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:111200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.985 [2024-11-26 19:51:41.200541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.985 [2024-11-26 19:51:41.200549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:111208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.985 [2024-11-26 19:51:41.200554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.985 [2024-11-26 19:51:41.200561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:111216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.985 [2024-11-26 19:51:41.200567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.985 [2024-11-26 19:51:41.200574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:111224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.985 [2024-11-26 19:51:41.200579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.985 [2024-11-26 19:51:41.200586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:111232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.985 [2024-11-26 19:51:41.200592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.985 [2024-11-26 19:51:41.200599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:111240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.985 [2024-11-26 19:51:41.200604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.985 [2024-11-26 19:51:41.200611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:111248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.985 [2024-11-26 19:51:41.200616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.985 [2024-11-26 19:51:41.200624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:110808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.985 [2024-11-26 19:51:41.200629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.985 [2024-11-26 19:51:41.200636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:110816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.985 [2024-11-26 19:51:41.200642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.985 [2024-11-26 19:51:41.200649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:110824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.985 [2024-11-26 19:51:41.200654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.985 [2024-11-26 19:51:41.200661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:110832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.985 [2024-11-26 19:51:41.200666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.985 [2024-11-26 19:51:41.200674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:110840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.985 [2024-11-26 19:51:41.200679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.985 [2024-11-26 19:51:41.200689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:110848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.985 [2024-11-26 19:51:41.200694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:17:45.985 [2024-11-26 19:51:41.200701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:110856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.985 [2024-11-26 19:51:41.200706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.985 [2024-11-26 19:51:41.200732] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:45.985 [2024-11-26 19:51:41.200743] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:45.985 [2024-11-26 19:51:41.200748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110864 len:8 PRP1 0x0 PRP2 0x0 00:17:45.985 [2024-11-26 19:51:41.200754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.985 [2024-11-26 19:51:41.201010] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:17:45.986 [2024-11-26 19:51:41.201033] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20dee50 (9): Bad file descriptor 00:17:45.986 [2024-11-26 19:51:41.201102] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:45.986 [2024-11-26 19:51:41.201113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20dee50 with addr=10.0.0.3, port=4420 00:17:45.986 [2024-11-26 19:51:41.201120] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20dee50 is same with the state(6) to be set 00:17:45.986 [2024-11-26 19:51:41.201130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20dee50 (9): Bad file descriptor 00:17:45.986 [2024-11-26 19:51:41.201140] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:17:45.986 [2024-11-26 19:51:41.201146] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:17:45.986 [2024-11-26 19:51:41.201152] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:17:45.986 [2024-11-26 19:51:41.201158] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:17:45.986 [2024-11-26 19:51:41.201165] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:17:45.986 19:51:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:17:47.176 6889.50 IOPS, 26.91 MiB/s [2024-11-26T19:51:42.423Z] [2024-11-26 19:51:42.201252] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:47.176 [2024-11-26 19:51:42.201295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20dee50 with addr=10.0.0.3, port=4420 00:17:47.176 [2024-11-26 19:51:42.201302] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20dee50 is same with the state(6) to be set 00:17:47.176 [2024-11-26 19:51:42.201313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20dee50 (9): Bad file descriptor 00:17:47.176 [2024-11-26 19:51:42.201322] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:17:47.176 [2024-11-26 19:51:42.201327] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:17:47.176 [2024-11-26 19:51:42.201332] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:17:47.176 [2024-11-26 19:51:42.201338] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:17:47.176 [2024-11-26 19:51:42.201344] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:17:48.109 4593.00 IOPS, 17.94 MiB/s [2024-11-26T19:51:43.356Z] [2024-11-26 19:51:43.201495] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:48.109 [2024-11-26 19:51:43.201557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20dee50 with addr=10.0.0.3, port=4420 00:17:48.109 [2024-11-26 19:51:43.201569] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20dee50 is same with the state(6) to be set 00:17:48.109 [2024-11-26 19:51:43.201587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20dee50 (9): Bad file descriptor 00:17:48.109 [2024-11-26 19:51:43.201608] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:17:48.109 [2024-11-26 19:51:43.201615] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:17:48.109 [2024-11-26 19:51:43.201623] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:17:48.109 [2024-11-26 19:51:43.201632] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:17:48.109 [2024-11-26 19:51:43.201639] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:17:49.043 3444.75 IOPS, 13.46 MiB/s [2024-11-26T19:51:44.290Z] [2024-11-26 19:51:44.204270] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:49.043 [2024-11-26 19:51:44.204308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20dee50 with addr=10.0.0.3, port=4420 00:17:49.043 [2024-11-26 19:51:44.204316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20dee50 is same with the state(6) to be set 00:17:49.043 [2024-11-26 19:51:44.204482] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20dee50 (9): Bad file descriptor 00:17:49.043 [2024-11-26 19:51:44.204642] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:17:49.043 [2024-11-26 19:51:44.204650] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:17:49.043 [2024-11-26 19:51:44.204656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:17:49.043 [2024-11-26 19:51:44.204662] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:17:49.043 [2024-11-26 19:51:44.204668] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:17:49.043 19:51:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:49.301 [2024-11-26 19:51:44.372627] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:49.301 19:51:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 80847 00:17:50.179 2755.80 IOPS, 10.76 MiB/s [2024-11-26T19:51:45.426Z] [2024-11-26 19:51:45.232370] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful. 
00:17:52.048 4251.00 IOPS, 16.61 MiB/s [2024-11-26T19:51:48.228Z] 5586.57 IOPS, 21.82 MiB/s [2024-11-26T19:51:49.162Z] 6584.25 IOPS, 25.72 MiB/s [2024-11-26T19:51:50.119Z] 7363.78 IOPS, 28.76 MiB/s [2024-11-26T19:51:50.119Z] 7977.80 IOPS, 31.16 MiB/s 00:17:54.872 Latency(us) 00:17:54.872 [2024-11-26T19:51:50.119Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:54.872 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:54.872 Verification LBA range: start 0x0 length 0x4000 00:17:54.872 NVMe0n1 : 10.01 7983.63 31.19 5628.30 0.00 9377.30 444.26 3006993.33 00:17:54.872 [2024-11-26T19:51:50.119Z] =================================================================================================================== 00:17:54.872 [2024-11-26T19:51:50.119Z] Total : 7983.63 31.19 5628.30 0.00 9377.30 0.00 3006993.33 00:17:54.872 { 00:17:54.872 "results": [ 00:17:54.872 { 00:17:54.872 "job": "NVMe0n1", 00:17:54.872 "core_mask": "0x4", 00:17:54.872 "workload": "verify", 00:17:54.872 "status": "finished", 00:17:54.872 "verify_range": { 00:17:54.872 "start": 0, 00:17:54.872 "length": 16384 00:17:54.872 }, 00:17:54.872 "queue_depth": 128, 00:17:54.872 "io_size": 4096, 00:17:54.872 "runtime": 10.006221, 00:17:54.872 "iops": 7983.633381673261, 00:17:54.872 "mibps": 31.186067897161177, 00:17:54.872 "io_failed": 56318, 00:17:54.872 "io_timeout": 0, 00:17:54.872 "avg_latency_us": 9377.296881103684, 00:17:54.872 "min_latency_us": 444.2584615384615, 00:17:54.872 "max_latency_us": 3006993.329230769 00:17:54.872 } 00:17:54.872 ], 00:17:54.872 "core_count": 1 00:17:54.872 } 00:17:55.128 19:51:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 80719 00:17:55.128 19:51:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 80719 ']' 00:17:55.128 19:51:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 80719 00:17:55.128 19:51:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:17:55.128 19:51:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:55.128 19:51:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80719 00:17:55.128 killing process with pid 80719 00:17:55.128 Received shutdown signal, test time was about 10.000000 seconds 00:17:55.128 00:17:55.128 Latency(us) 00:17:55.128 [2024-11-26T19:51:50.375Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:55.128 [2024-11-26T19:51:50.375Z] =================================================================================================================== 00:17:55.128 [2024-11-26T19:51:50.375Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:55.128 19:51:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:55.128 19:51:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:55.129 19:51:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80719' 00:17:55.129 19:51:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 80719 00:17:55.129 19:51:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 80719 00:17:55.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:17:55.129 19:51:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=80961 00:17:55.129 19:51:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 80961 /var/tmp/bdevperf.sock 00:17:55.129 19:51:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:17:55.129 19:51:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 80961 ']' 00:17:55.129 19:51:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:55.129 19:51:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:55.129 19:51:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:55.129 19:51:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:55.129 19:51:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:55.129 [2024-11-26 19:51:50.323093] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:17:55.129 [2024-11-26 19:51:50.323175] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80961 ] 00:17:55.385 [2024-11-26 19:51:50.460329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.385 [2024-11-26 19:51:50.498570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:55.385 [2024-11-26 19:51:50.537161] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:55.948 19:51:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:55.948 19:51:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:17:55.948 19:51:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80961 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:17:55.949 19:51:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=80977 00:17:55.949 19:51:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:17:56.205 19:51:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:17:56.461 NVMe0n1 00:17:56.461 19:51:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=81019 00:17:56.461 19:51:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:17:56.461 19:51:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:56.716 Running I/O for 10 seconds... 
00:17:57.650 19:51:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:57.650 19685.00 IOPS, 76.89 MiB/s [2024-11-26T19:51:52.897Z] [2024-11-26 19:51:52.860320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eebac0 is same with the state(6) to be set 00:17:57.650 [2024-11-26 19:51:52.860357] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eebac0 is same with the state(6) to be set 00:17:57.650 [2024-11-26 19:51:52.860362] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eebac0 is same with the state(6) to be set 00:17:57.650 [2024-11-26 19:51:52.860366] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eebac0 is same with the state(6) to be set 00:17:57.650 [2024-11-26 19:51:52.860371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eebac0 is same with the state(6) to be set 00:17:57.650 [2024-11-26 19:51:52.860375] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eebac0 is same with the state(6) to be set 00:17:57.650 [2024-11-26 19:51:52.860379] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eebac0 is same with the state(6) to be set 00:17:57.650 [2024-11-26 19:51:52.860383] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eebac0 is same with the state(6) to be set 00:17:57.650 [2024-11-26 19:51:52.860388] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eebac0 is same with the state(6) to be set 00:17:57.650 [2024-11-26 19:51:52.860392] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eebac0 is same with the state(6) to be set 00:17:57.650 [2024-11-26 19:51:52.860395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eebac0 is same with the state(6) to be set 00:17:57.650 [2024-11-26 19:51:52.860399] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eebac0 is same with the state(6) to be set 00:17:57.650 [2024-11-26 19:51:52.860403] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eebac0 is same with the state(6) to be set 00:17:57.650 [2024-11-26 19:51:52.860407] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eebac0 is same with the state(6) to be set 00:17:57.650 [2024-11-26 19:51:52.860410] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eebac0 is same with the state(6) to be set 00:17:57.650 [2024-11-26 19:51:52.860414] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eebac0 is same with the state(6) to be set 00:17:57.650 [2024-11-26 19:51:52.860418] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eebac0 is same with the state(6) to be set 00:17:57.650 [2024-11-26 19:51:52.860422] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eebac0 is same with the state(6) to be set 00:17:57.650 [2024-11-26 19:51:52.860425] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eebac0 is same with the state(6) to be set 00:17:57.650 [2024-11-26 19:51:52.860429] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eebac0 is same with the state(6) to be set 
00:17:57.650 [2024-11-26 19:51:52.860433] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eebac0 is same with the state(6) to be set 00:17:57.650 [2024-11-26 19:51:52.860436] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eebac0 is same with the state(6) to be set 00:17:57.650 [2024-11-26 19:51:52.860441] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eebac0 is same with the state(6) to be set 00:17:57.650 [2024-11-26 19:51:52.860445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eebac0 is same with the state(6) to be set 00:17:57.650 [2024-11-26 19:51:52.860448] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eebac0 is same with the state(6) to be set 00:17:57.650 [2024-11-26 19:51:52.860452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eebac0 is same with the state(6) to be set 00:17:57.650 [2024-11-26 19:51:52.860456] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eebac0 is same with the state(6) to be set 00:17:57.650 [2024-11-26 19:51:52.860459] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eebac0 is same with the state(6) to be set 00:17:57.650 [2024-11-26 19:51:52.860463] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eebac0 is same with the state(6) to be set 00:17:57.650 [2024-11-26 19:51:52.860466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eebac0 is same with the state(6) to be set 00:17:57.650 [2024-11-26 19:51:52.860471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eebac0 is same with the state(6) to be set 00:17:57.650 [2024-11-26 19:51:52.860475] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eebac0 is same with the state(6) to be set 00:17:57.650 [2024-11-26 19:51:52.860479] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eebac0 is same with the state(6) to be set 00:17:57.650 [2024-11-26 19:51:52.860483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eebac0 is same with the state(6) to be set 00:17:57.650 [2024-11-26 19:51:52.860487] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eebac0 is same with the state(6) to be set 00:17:57.650 [2024-11-26 19:51:52.860491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eebac0 is same with the state(6) to be set 00:17:57.650 [2024-11-26 19:51:52.860495] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eebac0 is same with the state(6) to be set 00:17:57.650 [2024-11-26 19:51:52.860499] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eebac0 is same with the state(6) to be set 00:17:57.650 [2024-11-26 19:51:52.860503] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eebac0 is same with the state(6) to be set 00:17:57.650 [2024-11-26 19:51:52.860506] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eebac0 is same with the state(6) to be set 00:17:57.650 [2024-11-26 19:51:52.860510] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eebac0 is same with the state(6) to be set 00:17:57.650 [2024-11-26 19:51:52.860514] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1eebac0 is same with the state(6) to be set 00:17:57.650 [2024-11-26 19:51:52.860519] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eebac0 is same with the state(6) to be set 00:17:57.650 [2024-11-26 19:51:52.860525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eebac0 is same with the state(6) to be set 00:17:57.650 [2024-11-26 19:51:52.860530] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eebac0 is same with the state(6) to be set 00:17:57.650 [2024-11-26 19:51:52.860535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eebac0 is same with the state(6) to be set 00:17:57.650 [2024-11-26 19:51:52.860540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eebac0 is same with the state(6) to be set 00:17:57.650 [2024-11-26 19:51:52.860545] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eebac0 is same with the state(6) to be set 00:17:57.650 [2024-11-26 19:51:52.860551] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eebac0 is same with the state(6) to be set 00:17:57.650 [2024-11-26 19:51:52.860564] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eebac0 is same with the state(6) to be set 00:17:57.650 [2024-11-26 19:51:52.860570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eebac0 is same with the state(6) to be set 00:17:57.650 [2024-11-26 19:51:52.860576] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eebac0 is same with the state(6) to be set 00:17:57.650 [2024-11-26 19:51:52.860579] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eebac0 is same with the state(6) to be set 00:17:57.650 [2024-11-26 19:51:52.860583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eebac0 is same with the state(6) to be set 00:17:57.650 [2024-11-26 19:51:52.860587] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eebac0 is same with the state(6) to be set 00:17:57.650 [2024-11-26 19:51:52.860590] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eebac0 is same with the state(6) to be set 00:17:57.650 [2024-11-26 19:51:52.860594] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eebac0 is same with the state(6) to be set 00:17:57.650 [2024-11-26 19:51:52.860598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eebac0 is same with the state(6) to be set 00:17:57.650 [2024-11-26 19:51:52.860601] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eebac0 is same with the state(6) to be set 00:17:57.650 [2024-11-26 19:51:52.860605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eebac0 is same with the state(6) to be set 00:17:57.650 [2024-11-26 19:51:52.861167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.650 [2024-11-26 19:51:52.861200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.650 [2024-11-26 19:51:52.861216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:117232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.650 
[2024-11-26 19:51:52.861222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.650 [2024-11-26 19:51:52.861231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:67720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.650 [2024-11-26 19:51:52.861237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.650 [2024-11-26 19:51:52.861246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:51088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.650 [2024-11-26 19:51:52.861252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.650 [2024-11-26 19:51:52.861259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:35128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.650 [2024-11-26 19:51:52.861265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.650 [2024-11-26 19:51:52.861273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:94648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.650 [2024-11-26 19:51:52.861278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.650 [2024-11-26 19:51:52.861285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:88768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.650 [2024-11-26 19:51:52.861290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.650 [2024-11-26 19:51:52.861298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:27672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.650 [2024-11-26 19:51:52.861303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.650 [2024-11-26 19:51:52.861311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:44736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.650 [2024-11-26 19:51:52.861316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.650 [2024-11-26 19:51:52.861323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:26448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.650 [2024-11-26 19:51:52.861329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.650 [2024-11-26 19:51:52.861337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:130296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.650 [2024-11-26 19:51:52.861343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.650 [2024-11-26 19:51:52.861350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:67936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.650 [2024-11-26 19:51:52.861355] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.650 [2024-11-26 19:51:52.861363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:124848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.651 [2024-11-26 19:51:52.861369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.651 [2024-11-26 19:51:52.861376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:85520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.651 [2024-11-26 19:51:52.861382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.651 [2024-11-26 19:51:52.861389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:96344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.651 [2024-11-26 19:51:52.861395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.651 [2024-11-26 19:51:52.861402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:73608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.651 [2024-11-26 19:51:52.861408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.651 [2024-11-26 19:51:52.861415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:111600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.651 [2024-11-26 19:51:52.861423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.651 [2024-11-26 19:51:52.861431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:18104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.651 [2024-11-26 19:51:52.861436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.651 [2024-11-26 19:51:52.861444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:51080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.651 [2024-11-26 19:51:52.861449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.651 [2024-11-26 19:51:52.861457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:54264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.651 [2024-11-26 19:51:52.861462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.651 [2024-11-26 19:51:52.861470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:109792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.651 [2024-11-26 19:51:52.861475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.651 [2024-11-26 19:51:52.861482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:24440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.651 [2024-11-26 19:51:52.861487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.651 [2024-11-26 19:51:52.861495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:50864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.651 [2024-11-26 19:51:52.861500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.651 [2024-11-26 19:51:52.861508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:33216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.651 [2024-11-26 19:51:52.861513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.651 [2024-11-26 19:51:52.861520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:13672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.651 [2024-11-26 19:51:52.861526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.651 [2024-11-26 19:51:52.861533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:50216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.651 [2024-11-26 19:51:52.861539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.651 [2024-11-26 19:51:52.861546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.651 [2024-11-26 19:51:52.861552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.651 [2024-11-26 19:51:52.861559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:18792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.651 [2024-11-26 19:51:52.861564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.651 [2024-11-26 19:51:52.861572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:35296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.651 [2024-11-26 19:51:52.861577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.651 [2024-11-26 19:51:52.861585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:60944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.651 [2024-11-26 19:51:52.861590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.651 [2024-11-26 19:51:52.861597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:101608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.651 [2024-11-26 19:51:52.861602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.651 [2024-11-26 19:51:52.861610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:49400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.651 [2024-11-26 19:51:52.861615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.651 [2024-11-26 19:51:52.861622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:53312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.651 [2024-11-26 19:51:52.861628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.651 [2024-11-26 19:51:52.861636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:119776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.651 [2024-11-26 19:51:52.861642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.651 [2024-11-26 19:51:52.861649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.651 [2024-11-26 19:51:52.861655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.651 [2024-11-26 19:51:52.861662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:17376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.651 [2024-11-26 19:51:52.861667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.651 [2024-11-26 19:51:52.861674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:51168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.651 [2024-11-26 19:51:52.861680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.651 [2024-11-26 19:51:52.861687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:80360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.651 [2024-11-26 19:51:52.861692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.651 [2024-11-26 19:51:52.861700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:18968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.651 [2024-11-26 19:51:52.861705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.651 [2024-11-26 19:51:52.861713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:58168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.651 [2024-11-26 19:51:52.861718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.651 [2024-11-26 19:51:52.861726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:6560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.651 [2024-11-26 19:51:52.861731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.651 [2024-11-26 19:51:52.861739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:16880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.651 [2024-11-26 19:51:52.861745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.651 
[2024-11-26 19:51:52.861752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:86944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.651 [2024-11-26 19:51:52.861757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.651 [2024-11-26 19:51:52.861774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:62840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.651 [2024-11-26 19:51:52.861780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.651 [2024-11-26 19:51:52.861787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:59368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.651 [2024-11-26 19:51:52.861792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.651 [2024-11-26 19:51:52.861799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.651 [2024-11-26 19:51:52.861805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.651 [2024-11-26 19:51:52.861812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:103784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.651 [2024-11-26 19:51:52.861818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.651 [2024-11-26 19:51:52.861825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:75432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.651 [2024-11-26 19:51:52.861830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.651 [2024-11-26 19:51:52.861838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:40616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.651 [2024-11-26 19:51:52.861844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.651 [2024-11-26 19:51:52.861851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.651 [2024-11-26 19:51:52.861858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.651 [2024-11-26 19:51:52.861866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.651 [2024-11-26 19:51:52.861871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.651 [2024-11-26 19:51:52.861879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:84048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.651 [2024-11-26 19:51:52.861884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.651 [2024-11-26 19:51:52.861892] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.652 [2024-11-26 19:51:52.861897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.652 [2024-11-26 19:51:52.861905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.652 [2024-11-26 19:51:52.861910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.652 [2024-11-26 19:51:52.861918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:130560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.652 [2024-11-26 19:51:52.861923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.652 [2024-11-26 19:51:52.861930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:123240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.652 [2024-11-26 19:51:52.861936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.652 [2024-11-26 19:51:52.861943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:50288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.652 [2024-11-26 19:51:52.861949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.652 [2024-11-26 19:51:52.861956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:41360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.652 [2024-11-26 19:51:52.861962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.652 [2024-11-26 19:51:52.861969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:116360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.652 [2024-11-26 19:51:52.861975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.652 [2024-11-26 19:51:52.861982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:89744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.652 [2024-11-26 19:51:52.861987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.652 [2024-11-26 19:51:52.861996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:60104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.652 [2024-11-26 19:51:52.862002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.652 [2024-11-26 19:51:52.862009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x632ec0 is same with the state(6) to be set 00:17:57.652 [2024-11-26 19:51:52.862016] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:57.652 [2024-11-26 19:51:52.862021] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:57.652 [2024-11-26 
19:51:52.862026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87664 len:8 PRP1 0x0 PRP2 0x0 00:17:57.652 [2024-11-26 19:51:52.862032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.652 [2024-11-26 19:51:52.862039] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:57.652 [2024-11-26 19:51:52.862044] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:57.652 [2024-11-26 19:51:52.862049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:125672 len:8 PRP1 0x0 PRP2 0x0 00:17:57.652 [2024-11-26 19:51:52.862055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.652 [2024-11-26 19:51:52.862061] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:57.652 [2024-11-26 19:51:52.862066] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:57.652 [2024-11-26 19:51:52.862071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:123304 len:8 PRP1 0x0 PRP2 0x0 00:17:57.652 [2024-11-26 19:51:52.862076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.652 [2024-11-26 19:51:52.862082] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:57.652 [2024-11-26 19:51:52.862086] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:57.652 [2024-11-26 19:51:52.862091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84000 len:8 PRP1 0x0 PRP2 0x0 00:17:57.652 [2024-11-26 19:51:52.862096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.652 [2024-11-26 19:51:52.862101] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:57.652 [2024-11-26 19:51:52.862106] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:57.652 [2024-11-26 19:51:52.862110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:115552 len:8 PRP1 0x0 PRP2 0x0 00:17:57.652 [2024-11-26 19:51:52.862115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.652 [2024-11-26 19:51:52.862121] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:57.652 [2024-11-26 19:51:52.862125] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:57.652 [2024-11-26 19:51:52.862130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120776 len:8 PRP1 0x0 PRP2 0x0 00:17:57.652 [2024-11-26 19:51:52.862135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.652 [2024-11-26 19:51:52.862141] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:57.652 [2024-11-26 19:51:52.862145] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:57.652 [2024-11-26 19:51:52.862150] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:111176 len:8 PRP1 0x0 PRP2 0x0 00:17:57.652 [2024-11-26 19:51:52.862155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.652 [2024-11-26 19:51:52.862161] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:57.652 [2024-11-26 19:51:52.862165] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:57.652 [2024-11-26 19:51:52.862170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:108512 len:8 PRP1 0x0 PRP2 0x0 00:17:57.652 [2024-11-26 19:51:52.862175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.652 [2024-11-26 19:51:52.862181] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:57.652 [2024-11-26 19:51:52.862190] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:57.652 [2024-11-26 19:51:52.862195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55472 len:8 PRP1 0x0 PRP2 0x0 00:17:57.652 [2024-11-26 19:51:52.862201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.652 [2024-11-26 19:51:52.862207] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:57.652 [2024-11-26 19:51:52.862211] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:57.652 [2024-11-26 19:51:52.862217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6992 len:8 PRP1 0x0 PRP2 0x0 00:17:57.652 [2024-11-26 19:51:52.862223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.652 [2024-11-26 19:51:52.862229] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:57.652 [2024-11-26 19:51:52.862233] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:57.652 [2024-11-26 19:51:52.862239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77168 len:8 PRP1 0x0 PRP2 0x0 00:17:57.652 [2024-11-26 19:51:52.862244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.652 [2024-11-26 19:51:52.862250] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:57.652 [2024-11-26 19:51:52.862254] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:57.652 [2024-11-26 19:51:52.862259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16088 len:8 PRP1 0x0 PRP2 0x0 00:17:57.652 [2024-11-26 19:51:52.862264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.652 [2024-11-26 19:51:52.862270] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:57.652 [2024-11-26 19:51:52.862274] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:57.652 [2024-11-26 19:51:52.862279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 
nsid:1 lba:9240 len:8 PRP1 0x0 PRP2 0x0 00:17:57.652 [2024-11-26 19:51:52.862285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.652 [2024-11-26 19:51:52.862291] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:57.652 [2024-11-26 19:51:52.862295] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:57.652 [2024-11-26 19:51:52.862299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103720 len:8 PRP1 0x0 PRP2 0x0 00:17:57.652 [2024-11-26 19:51:52.862305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.652 [2024-11-26 19:51:52.862311] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:57.652 [2024-11-26 19:51:52.862315] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:57.652 [2024-11-26 19:51:52.862320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62880 len:8 PRP1 0x0 PRP2 0x0 00:17:57.652 [2024-11-26 19:51:52.862325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.652 [2024-11-26 19:51:52.862331] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:57.652 [2024-11-26 19:51:52.862335] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:57.652 [2024-11-26 19:51:52.862340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39808 len:8 PRP1 0x0 PRP2 0x0 00:17:57.652 [2024-11-26 19:51:52.862345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.652 [2024-11-26 19:51:52.862351] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:57.652 [2024-11-26 19:51:52.862357] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:57.652 [2024-11-26 19:51:52.862362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49368 len:8 PRP1 0x0 PRP2 0x0 00:17:57.652 [2024-11-26 19:51:52.862367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.652 [2024-11-26 19:51:52.862373] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:57.652 [2024-11-26 19:51:52.862377] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:57.652 [2024-11-26 19:51:52.862382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42592 len:8 PRP1 0x0 PRP2 0x0 00:17:57.652 [2024-11-26 19:51:52.862387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.653 [2024-11-26 19:51:52.862393] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:57.653 [2024-11-26 19:51:52.862398] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:57.653 [2024-11-26 19:51:52.862402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:58088 len:8 PRP1 0x0 PRP2 0x0 00:17:57.653 
[2024-11-26 19:51:52.862408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.653 [2024-11-26 19:51:52.862414] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:57.653 [2024-11-26 19:51:52.862418] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:57.653 [2024-11-26 19:51:52.862423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:30136 len:8 PRP1 0x0 PRP2 0x0 00:17:57.653 [2024-11-26 19:51:52.862428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.653 [2024-11-26 19:51:52.862434] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:57.653 [2024-11-26 19:51:52.862438] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:57.653 [2024-11-26 19:51:52.862443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10184 len:8 PRP1 0x0 PRP2 0x0 00:17:57.653 [2024-11-26 19:51:52.862448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.653 [2024-11-26 19:51:52.862453] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:57.653 [2024-11-26 19:51:52.862458] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:57.653 [2024-11-26 19:51:52.862462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49576 len:8 PRP1 0x0 PRP2 0x0 00:17:57.653 [2024-11-26 19:51:52.862467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.653 [2024-11-26 19:51:52.862473] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:57.653 [2024-11-26 19:51:52.862477] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:57.653 [2024-11-26 19:51:52.862482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103104 len:8 PRP1 0x0 PRP2 0x0 00:17:57.653 [2024-11-26 19:51:52.862487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.653 [2024-11-26 19:51:52.862493] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:57.653 [2024-11-26 19:51:52.862497] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:57.653 [2024-11-26 19:51:52.862502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122584 len:8 PRP1 0x0 PRP2 0x0 00:17:57.653 [2024-11-26 19:51:52.862508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.653 [2024-11-26 19:51:52.862514] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:57.653 [2024-11-26 19:51:52.862520] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:57.653 [2024-11-26 19:51:52.862525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:106256 len:8 PRP1 0x0 PRP2 0x0 00:17:57.653 [2024-11-26 19:51:52.862530] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.653 [2024-11-26 19:51:52.862536] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:57.653 [2024-11-26 19:51:52.862541] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:57.653 [2024-11-26 19:51:52.862545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23328 len:8 PRP1 0x0 PRP2 0x0 00:17:57.653 [2024-11-26 19:51:52.862551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.653 [2024-11-26 19:51:52.862560] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:57.653 [2024-11-26 19:51:52.862565] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:57.653 [2024-11-26 19:51:52.862569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71728 len:8 PRP1 0x0 PRP2 0x0 00:17:57.653 [2024-11-26 19:51:52.862575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.653 [2024-11-26 19:51:52.862581] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:57.653 [2024-11-26 19:51:52.862584] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:57.653 [2024-11-26 19:51:52.862589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29144 len:8 PRP1 0x0 PRP2 0x0 00:17:57.653 [2024-11-26 19:51:52.862595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.653 [2024-11-26 19:51:52.862600] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:57.653 [2024-11-26 19:51:52.862604] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:57.653 [2024-11-26 19:51:52.862609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:27272 len:8 PRP1 0x0 PRP2 0x0 00:17:57.653 [2024-11-26 19:51:52.862614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.653 [2024-11-26 19:51:52.862620] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:57.653 [2024-11-26 19:51:52.862624] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:57.653 [2024-11-26 19:51:52.862628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22648 len:8 PRP1 0x0 PRP2 0x0 00:17:57.653 [2024-11-26 19:51:52.862634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.653 [2024-11-26 19:51:52.862640] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:57.653 [2024-11-26 19:51:52.862644] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:57.653 [2024-11-26 19:51:52.862648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11200 len:8 PRP1 0x0 PRP2 0x0 00:17:57.653 [2024-11-26 19:51:52.862654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.653 [2024-11-26 19:51:52.862660] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:57.653 [2024-11-26 19:51:52.862664] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:57.653 [2024-11-26 19:51:52.862669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:123656 len:8 PRP1 0x0 PRP2 0x0 00:17:57.653 [2024-11-26 19:51:52.862674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.653 [2024-11-26 19:51:52.862680] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:57.653 [2024-11-26 19:51:52.862686] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:57.653 [2024-11-26 19:51:52.862691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121712 len:8 PRP1 0x0 PRP2 0x0 00:17:57.653 [2024-11-26 19:51:52.862696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.653 [2024-11-26 19:51:52.862703] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:57.653 [2024-11-26 19:51:52.862707] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:57.653 [2024-11-26 19:51:52.862712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98504 len:8 PRP1 0x0 PRP2 0x0 00:17:57.653 [2024-11-26 19:51:52.862717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.653 [2024-11-26 19:51:52.862724] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:57.653 [2024-11-26 19:51:52.862729] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:57.653 [2024-11-26 19:51:52.862734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39792 len:8 PRP1 0x0 PRP2 0x0 00:17:57.653 [2024-11-26 19:51:52.862739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.653 [2024-11-26 19:51:52.862745] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:57.653 [2024-11-26 19:51:52.862749] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:57.653 [2024-11-26 19:51:52.862754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:36952 len:8 PRP1 0x0 PRP2 0x0 00:17:57.653 [2024-11-26 19:51:52.862759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.653 [2024-11-26 19:51:52.862771] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:57.653 [2024-11-26 19:51:52.862776] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:57.653 [2024-11-26 19:51:52.862781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98440 len:8 PRP1 0x0 PRP2 0x0 00:17:57.653 [2024-11-26 19:51:52.862786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:17:57.653 [2024-11-26 19:51:52.862792] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:57.653 [2024-11-26 19:51:52.862796] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:57.653 [2024-11-26 19:51:52.862800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52400 len:8 PRP1 0x0 PRP2 0x0 00:17:57.654 [2024-11-26 19:51:52.862806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.654 [2024-11-26 19:51:52.862811] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:57.654 [2024-11-26 19:51:52.862816] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:57.654 [2024-11-26 19:51:52.862820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64192 len:8 PRP1 0x0 PRP2 0x0 00:17:57.654 [2024-11-26 19:51:52.862826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.654 [2024-11-26 19:51:52.862832] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:57.654 [2024-11-26 19:51:52.862836] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:57.654 [2024-11-26 19:51:52.862841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119088 len:8 PRP1 0x0 PRP2 0x0 00:17:57.654 [2024-11-26 19:51:52.862846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.654 [2024-11-26 19:51:52.862852] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:57.654 [2024-11-26 19:51:52.862858] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:57.654 [2024-11-26 19:51:52.862863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22040 len:8 PRP1 0x0 PRP2 0x0 00:17:57.654 [2024-11-26 19:51:52.862868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.654 [2024-11-26 19:51:52.862875] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:57.654 [2024-11-26 19:51:52.862879] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:57.654 [2024-11-26 19:51:52.862883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46696 len:8 PRP1 0x0 PRP2 0x0 00:17:57.654 [2024-11-26 19:51:52.862889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.654 [2024-11-26 19:51:52.862896] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:57.654 [2024-11-26 19:51:52.862900] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:57.654 [2024-11-26 19:51:52.862905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:111296 len:8 PRP1 0x0 PRP2 0x0 00:17:57.654 [2024-11-26 19:51:52.862910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.654 [2024-11-26 19:51:52.862916] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:57.654 [2024-11-26 19:51:52.862920] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:57.654 [2024-11-26 19:51:52.862925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91760 len:8 PRP1 0x0 PRP2 0x0 00:17:57.654 [2024-11-26 19:51:52.862930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.654 [2024-11-26 19:51:52.862936] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:57.654 [2024-11-26 19:51:52.862940] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:57.654 [2024-11-26 19:51:52.862944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120288 len:8 PRP1 0x0 PRP2 0x0 00:17:57.654 [2024-11-26 19:51:52.862950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.654 [2024-11-26 19:51:52.862956] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:57.654 [2024-11-26 19:51:52.862960] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:57.654 [2024-11-26 19:51:52.862964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35200 len:8 PRP1 0x0 PRP2 0x0 00:17:57.654 [2024-11-26 19:51:52.862969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.654 [2024-11-26 19:51:52.862975] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:57.654 [2024-11-26 19:51:52.862979] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:57.654 [2024-11-26 19:51:52.862984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:37352 len:8 PRP1 0x0 PRP2 0x0 00:17:57.654 [2024-11-26 19:51:52.862990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.654 [2024-11-26 19:51:52.862996] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:57.654 [2024-11-26 19:51:52.863000] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:57.654 [2024-11-26 19:51:52.863005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84592 len:8 PRP1 0x0 PRP2 0x0 00:17:57.654 [2024-11-26 19:51:52.863010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.654 [2024-11-26 19:51:52.863016] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:57.654 [2024-11-26 19:51:52.863022] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:57.654 [2024-11-26 19:51:52.863027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67360 len:8 PRP1 0x0 PRP2 0x0 00:17:57.654 [2024-11-26 19:51:52.869258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.654 [2024-11-26 19:51:52.869292] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued 
i/o 00:17:57.654 [2024-11-26 19:51:52.869298] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:57.654 [2024-11-26 19:51:52.869306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45856 len:8 PRP1 0x0 PRP2 0x0 00:17:57.654 [2024-11-26 19:51:52.869314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.654 [2024-11-26 19:51:52.869323] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:57.654 [2024-11-26 19:51:52.869328] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:57.654 [2024-11-26 19:51:52.869334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40000 len:8 PRP1 0x0 PRP2 0x0 00:17:57.654 [2024-11-26 19:51:52.869341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.654 [2024-11-26 19:51:52.869348] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:57.654 [2024-11-26 19:51:52.869354] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:57.654 [2024-11-26 19:51:52.869359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82208 len:8 PRP1 0x0 PRP2 0x0 00:17:57.654 [2024-11-26 19:51:52.869364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.654 [2024-11-26 19:51:52.869370] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:57.654 [2024-11-26 19:51:52.869374] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:57.654 [2024-11-26 19:51:52.869379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:118440 len:8 PRP1 0x0 PRP2 0x0 00:17:57.654 [2024-11-26 19:51:52.869384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.654 [2024-11-26 19:51:52.869390] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:57.654 [2024-11-26 19:51:52.869395] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:57.654 [2024-11-26 19:51:52.869399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4592 len:8 PRP1 0x0 PRP2 0x0 00:17:57.654 [2024-11-26 19:51:52.869405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.654 [2024-11-26 19:51:52.869410] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:57.654 [2024-11-26 19:51:52.869415] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:57.654 [2024-11-26 19:51:52.869419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121672 len:8 PRP1 0x0 PRP2 0x0 00:17:57.654 [2024-11-26 19:51:52.869425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.654 [2024-11-26 19:51:52.869431] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:57.654 [2024-11-26 19:51:52.869435] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:57.654 [2024-11-26 19:51:52.869440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83936 len:8 PRP1 0x0 PRP2 0x0 00:17:57.654 [2024-11-26 19:51:52.869445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.654 [2024-11-26 19:51:52.869451] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:57.654 [2024-11-26 19:51:52.869455] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:57.654 [2024-11-26 19:51:52.869460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65216 len:8 PRP1 0x0 PRP2 0x0 00:17:57.654 [2024-11-26 19:51:52.869465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.654 [2024-11-26 19:51:52.869471] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:57.654 [2024-11-26 19:51:52.869475] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:57.655 [2024-11-26 19:51:52.869480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91680 len:8 PRP1 0x0 PRP2 0x0 00:17:57.655 [2024-11-26 19:51:52.869485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.655 [2024-11-26 19:51:52.869491] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:57.655 [2024-11-26 19:51:52.869495] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:57.655 [2024-11-26 19:51:52.869500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:28176 len:8 PRP1 0x0 PRP2 0x0 00:17:57.655 [2024-11-26 19:51:52.869505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.655 [2024-11-26 19:51:52.869510] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:57.655 [2024-11-26 19:51:52.869515] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:57.655 [2024-11-26 19:51:52.869519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:50576 len:8 PRP1 0x0 PRP2 0x0 00:17:57.655 [2024-11-26 19:51:52.869525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.655 [2024-11-26 19:51:52.869530] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:57.655 [2024-11-26 19:51:52.869534] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:57.655 [2024-11-26 19:51:52.869539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10008 len:8 PRP1 0x0 PRP2 0x0 00:17:57.655 [2024-11-26 19:51:52.869544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.655 [2024-11-26 19:51:52.869550] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:57.655 [2024-11-26 19:51:52.869554] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command 
completed manually: 00:17:57.655 [2024-11-26 19:51:52.869559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:123200 len:8 PRP1 0x0 PRP2 0x0 00:17:57.655 [2024-11-26 19:51:52.869564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.655 [2024-11-26 19:51:52.869570] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:57.655 [2024-11-26 19:51:52.869574] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:57.655 [2024-11-26 19:51:52.869578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:47376 len:8 PRP1 0x0 PRP2 0x0 00:17:57.655 [2024-11-26 19:51:52.869583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.655 [2024-11-26 19:51:52.869589] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:57.655 [2024-11-26 19:51:52.869593] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:57.655 [2024-11-26 19:51:52.869598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6824 len:8 PRP1 0x0 PRP2 0x0 00:17:57.655 [2024-11-26 19:51:52.869603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.655 [2024-11-26 19:51:52.869609] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:57.655 [2024-11-26 19:51:52.869613] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:57.655 [2024-11-26 19:51:52.869618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17664 len:8 PRP1 0x0 PRP2 0x0 00:17:57.655 [2024-11-26 19:51:52.869623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.655 [2024-11-26 19:51:52.869628] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:57.655 [2024-11-26 19:51:52.869632] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:57.655 [2024-11-26 19:51:52.869637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120440 len:8 PRP1 0x0 PRP2 0x0 00:17:57.655 [2024-11-26 19:51:52.869642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.655 [2024-11-26 19:51:52.869649] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:57.655 [2024-11-26 19:51:52.869653] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:57.655 [2024-11-26 19:51:52.869658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87632 len:8 PRP1 0x0 PRP2 0x0 00:17:57.655 [2024-11-26 19:51:52.869664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.655 [2024-11-26 19:51:52.869791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.655 [2024-11-26 19:51:52.869808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.655 [2024-11-26 19:51:52.869816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.655 [2024-11-26 19:51:52.869821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.655 [2024-11-26 19:51:52.869828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.655 [2024-11-26 19:51:52.869833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.655 [2024-11-26 19:51:52.869840] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.655 [2024-11-26 19:51:52.869845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.655 [2024-11-26 19:51:52.869850] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c5e50 is same with the state(6) to be set 00:17:57.655 [2024-11-26 19:51:52.870076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:17:57.655 [2024-11-26 19:51:52.870095] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5c5e50 (9): Bad file descriptor 00:17:57.655 [2024-11-26 19:51:52.870167] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:57.655 [2024-11-26 19:51:52.870184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c5e50 with addr=10.0.0.3, port=4420 00:17:57.655 [2024-11-26 19:51:52.870191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c5e50 is same with the state(6) to be set 00:17:57.655 [2024-11-26 19:51:52.870201] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5c5e50 (9): Bad file descriptor 00:17:57.655 [2024-11-26 19:51:52.870211] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:17:57.655 [2024-11-26 19:51:52.870216] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:17:57.655 [2024-11-26 19:51:52.870223] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:17:57.655 [2024-11-26 19:51:52.870230] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
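The long run of "ABORTED - SQ DELETION (00/08)" completions above is the expected fallout of the connection being torn down mid-run: the submission queue disappears, so every outstanding READ is completed manually by the host with status type 00h (generic command status) and status code 08h (Command Aborted due to SQ Deletion), and the follow-up connect() to 10.0.0.3:4420 is refused (errno 111, ECONNREFUSED) because nothing is accepting connections there at that point, leaving the controller in a failed state until bdev_nvme schedules the next reset. A quick way to summarize a capture like this after the fact is to count the markers; a minimal sketch, assuming the console output has been saved to a hypothetical build.log:

  # Hypothetical post-mortem helper: summarize abort/reconnect activity in a saved console log.
  grep -c 'ABORTED - SQ DELETION' build.log            # commands aborted by SQ deletion
  grep -c 'connect() failed, errno = 111' build.log    # refused TCP reconnect attempts
  grep -c 'Resetting controller failed' build.log      # failed bdev_nvme reset cycles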
00:17:57.655 [2024-11-26 19:51:52.870236] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:17:57.655 19:51:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 81019 00:17:59.613 10764.50 IOPS, 42.05 MiB/s [2024-11-26T19:51:55.118Z] 7176.33 IOPS, 28.03 MiB/s [2024-11-26T19:51:55.118Z] [2024-11-26 19:51:54.870516] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:59.871 [2024-11-26 19:51:54.870558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c5e50 with addr=10.0.0.3, port=4420 00:17:59.871 [2024-11-26 19:51:54.870566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c5e50 is same with the state(6) to be set 00:17:59.871 [2024-11-26 19:51:54.870579] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5c5e50 (9): Bad file descriptor 00:17:59.871 [2024-11-26 19:51:54.870589] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:17:59.871 [2024-11-26 19:51:54.870593] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:17:59.871 [2024-11-26 19:51:54.870600] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:17:59.871 [2024-11-26 19:51:54.870606] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:17:59.871 [2024-11-26 19:51:54.870612] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:18:01.739 5382.25 IOPS, 21.02 MiB/s [2024-11-26T19:51:56.986Z] 4305.80 IOPS, 16.82 MiB/s [2024-11-26T19:51:56.986Z] [2024-11-26 19:51:56.870868] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:01.739 [2024-11-26 19:51:56.870907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c5e50 with addr=10.0.0.3, port=4420 00:18:01.739 [2024-11-26 19:51:56.870915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c5e50 is same with the state(6) to be set 00:18:01.739 [2024-11-26 19:51:56.870929] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5c5e50 (9): Bad file descriptor 00:18:01.739 [2024-11-26 19:51:56.870939] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:18:01.739 [2024-11-26 19:51:56.870943] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:18:01.739 [2024-11-26 19:51:56.870950] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:18:01.739 [2024-11-26 19:51:56.870956] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:18:01.739 [2024-11-26 19:51:56.870962] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:18:03.606 3588.17 IOPS, 14.02 MiB/s [2024-11-26T19:51:59.111Z] 3075.57 IOPS, 12.01 MiB/s [2024-11-26T19:51:59.111Z] [2024-11-26 19:51:58.871150] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
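Two things are worth reading off the retry sequence above. First, the reconnect attempts land at 19:51:52, 19:51:54, 19:51:56 and 19:51:58, roughly two seconds apart, which is consistent with a two-second reconnect delay on the bdev controller (the trace output further down records "reconnect delay" events at the same cadence). Second, the falling throughput figures are an averaging artifact rather than a slow device: once the connection drops no further I/O completes, so the completed count stays frozen at roughly 21,529 requests while the measurement window keeps growing, and the reported rate decays as 21529/t, giving 10764.50 IOPS at about 2 s, 7176.33 at 3 s, 5382.25 at 4 s, 4305.80 at 5 s, 3588.17 at 6 s and 3075.57 at 7 s, down to the 2652.51 average over the full 8.12 s run reported below. (The ~21,529 figure is inferred from runtime times IOPS in the final summary, not printed directly in the log.)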
00:18:03.864 [2024-11-26 19:51:58.871192] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:18:03.864 [2024-11-26 19:51:58.871197] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:18:03.864 [2024-11-26 19:51:58.871203] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state 00:18:03.864 [2024-11-26 19:51:58.871209] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:18:04.796 2691.12 IOPS, 10.51 MiB/s 00:18:04.796 Latency(us) 00:18:04.796 [2024-11-26T19:52:00.043Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:04.796 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:18:04.796 NVMe0n1 : 8.12 2652.51 10.36 15.77 0.00 47854.70 6099.89 7020619.62 00:18:04.796 [2024-11-26T19:52:00.043Z] =================================================================================================================== 00:18:04.796 [2024-11-26T19:52:00.043Z] Total : 2652.51 10.36 15.77 0.00 47854.70 6099.89 7020619.62 00:18:04.796 { 00:18:04.796 "results": [ 00:18:04.796 { 00:18:04.796 "job": "NVMe0n1", 00:18:04.796 "core_mask": "0x4", 00:18:04.796 "workload": "randread", 00:18:04.796 "status": "finished", 00:18:04.796 "queue_depth": 128, 00:18:04.796 "io_size": 4096, 00:18:04.796 "runtime": 8.116472, 00:18:04.796 "iops": 2652.5071484260648, 00:18:04.796 "mibps": 10.361356048539315, 00:18:04.796 "io_failed": 128, 00:18:04.796 "io_timeout": 0, 00:18:04.796 "avg_latency_us": 47854.69721113444, 00:18:04.796 "min_latency_us": 6099.88923076923, 00:18:04.796 "max_latency_us": 7020619.618461538 00:18:04.796 } 00:18:04.796 ], 00:18:04.796 "core_count": 1 00:18:04.796 } 00:18:04.796 19:51:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:04.797 Attaching 5 probes... 
00:18:04.797 1285.826247: reset bdev controller NVMe0 00:18:04.797 1285.876578: reconnect bdev controller NVMe0 00:18:04.797 3286.029389: reconnect delay bdev controller NVMe0 00:18:04.797 3286.045259: reconnect bdev controller NVMe0 00:18:04.797 5286.548928: reconnect delay bdev controller NVMe0 00:18:04.797 5286.561847: reconnect bdev controller NVMe0 00:18:04.797 7286.886391: reconnect delay bdev controller NVMe0 00:18:04.797 7286.903777: reconnect bdev controller NVMe0 00:18:04.797 19:51:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:18:04.797 19:51:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:18:04.797 19:51:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 80977 00:18:04.797 19:51:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:04.797 19:51:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 80961 00:18:04.797 19:51:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 80961 ']' 00:18:04.797 19:51:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 80961 00:18:04.797 19:51:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:18:04.797 19:51:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:04.797 19:51:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80961 00:18:04.797 killing process with pid 80961 00:18:04.797 Received shutdown signal, test time was about 8.169577 seconds 00:18:04.797 00:18:04.797 Latency(us) 00:18:04.797 [2024-11-26T19:52:00.044Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:04.797 [2024-11-26T19:52:00.044Z] =================================================================================================================== 00:18:04.797 [2024-11-26T19:52:00.044Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:04.797 19:51:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:04.797 19:51:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:04.797 19:51:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80961' 00:18:04.797 19:51:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 80961 00:18:04.797 19:51:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 80961 00:18:04.797 19:52:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:05.053 19:52:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:18:05.054 19:52:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:18:05.054 19:52:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:05.054 19:52:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:18:05.054 19:52:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:05.054 19:52:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:18:05.054 19:52:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:05.054 19:52:00 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:05.054 rmmod nvme_tcp 00:18:05.054 rmmod nvme_fabrics 00:18:05.054 rmmod nvme_keyring 00:18:05.054 19:52:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:05.054 19:52:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:18:05.054 19:52:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:18:05.054 19:52:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 80530 ']' 00:18:05.054 19:52:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 80530 00:18:05.054 19:52:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 80530 ']' 00:18:05.054 19:52:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 80530 00:18:05.054 19:52:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:18:05.054 19:52:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:05.054 19:52:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80530 00:18:05.054 killing process with pid 80530 00:18:05.054 19:52:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:05.054 19:52:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:05.054 19:52:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80530' 00:18:05.054 19:52:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 80530 00:18:05.054 19:52:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 80530 00:18:05.313 19:52:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:05.313 19:52:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:05.313 19:52:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:05.313 19:52:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:18:05.313 19:52:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 00:18:05.313 19:52:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:05.313 19:52:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:18:05.313 19:52:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:05.313 19:52:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:05.313 19:52:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:05.313 19:52:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:05.313 19:52:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:05.313 19:52:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:05.313 19:52:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:05.313 19:52:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:05.313 19:52:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:05.313 19:52:00 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:05.313 19:52:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:05.313 19:52:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:05.313 19:52:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:05.313 19:52:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:05.313 19:52:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:05.574 19:52:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:05.574 19:52:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:05.574 19:52:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:05.574 19:52:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:05.574 19:52:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:18:05.574 ************************************ 00:18:05.574 END TEST nvmf_timeout 00:18:05.574 ************************************ 00:18:05.574 00:18:05.574 real 0m45.110s 00:18:05.574 user 2m12.074s 00:18:05.574 sys 0m4.471s 00:18:05.574 19:52:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:05.574 19:52:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:05.574 19:52:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:18:05.574 19:52:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:18:05.574 00:18:05.574 real 4m58.876s 00:18:05.574 user 12m52.028s 00:18:05.574 sys 0m52.612s 00:18:05.574 19:52:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:05.574 ************************************ 00:18:05.574 END TEST nvmf_host 00:18:05.574 ************************************ 00:18:05.574 19:52:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.574 19:52:00 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:18:05.574 19:52:00 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:18:05.574 00:18:05.574 real 11m48.411s 00:18:05.574 user 28m23.284s 00:18:05.574 sys 2m23.377s 00:18:05.574 19:52:00 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:05.574 19:52:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:05.574 ************************************ 00:18:05.574 END TEST nvmf_tcp 00:18:05.574 ************************************ 00:18:05.574 19:52:00 -- spdk/autotest.sh@285 -- # [[ 1 -eq 0 ]] 00:18:05.574 19:52:00 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:18:05.575 19:52:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:05.575 19:52:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:05.575 19:52:00 -- common/autotest_common.sh@10 -- # set +x 00:18:05.575 ************************************ 00:18:05.575 START TEST nvmf_dif 00:18:05.575 ************************************ 00:18:05.575 19:52:00 nvmf_dif -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:18:05.575 * Looking for test storage... 
00:18:05.575 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:05.575 19:52:00 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:05.575 19:52:00 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:05.575 19:52:00 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:18:05.834 19:52:00 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:05.834 19:52:00 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:05.834 19:52:00 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:05.834 19:52:00 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:05.834 19:52:00 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:18:05.834 19:52:00 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:18:05.834 19:52:00 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:18:05.834 19:52:00 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:18:05.834 19:52:00 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:18:05.834 19:52:00 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:18:05.834 19:52:00 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:18:05.834 19:52:00 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:05.834 19:52:00 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:18:05.834 19:52:00 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:18:05.834 19:52:00 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:05.834 19:52:00 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:05.834 19:52:00 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:18:05.834 19:52:00 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:18:05.834 19:52:00 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:05.834 19:52:00 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:18:05.834 19:52:00 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:18:05.834 19:52:00 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:18:05.834 19:52:00 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:18:05.834 19:52:00 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:05.834 19:52:00 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:18:05.834 19:52:00 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:18:05.834 19:52:00 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:05.834 19:52:00 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:05.834 19:52:00 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:18:05.834 19:52:00 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:05.834 19:52:00 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:05.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:05.834 --rc genhtml_branch_coverage=1 00:18:05.834 --rc genhtml_function_coverage=1 00:18:05.835 --rc genhtml_legend=1 00:18:05.835 --rc geninfo_all_blocks=1 00:18:05.835 --rc geninfo_unexecuted_blocks=1 00:18:05.835 00:18:05.835 ' 00:18:05.835 19:52:00 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:05.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:05.835 --rc genhtml_branch_coverage=1 00:18:05.835 --rc genhtml_function_coverage=1 00:18:05.835 --rc genhtml_legend=1 00:18:05.835 --rc geninfo_all_blocks=1 00:18:05.835 --rc geninfo_unexecuted_blocks=1 00:18:05.835 00:18:05.835 ' 00:18:05.835 19:52:00 nvmf_dif -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:18:05.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:05.835 --rc genhtml_branch_coverage=1 00:18:05.835 --rc genhtml_function_coverage=1 00:18:05.835 --rc genhtml_legend=1 00:18:05.835 --rc geninfo_all_blocks=1 00:18:05.835 --rc geninfo_unexecuted_blocks=1 00:18:05.835 00:18:05.835 ' 00:18:05.835 19:52:00 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:05.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:05.835 --rc genhtml_branch_coverage=1 00:18:05.835 --rc genhtml_function_coverage=1 00:18:05.835 --rc genhtml_legend=1 00:18:05.835 --rc geninfo_all_blocks=1 00:18:05.835 --rc geninfo_unexecuted_blocks=1 00:18:05.835 00:18:05.835 ' 00:18:05.835 19:52:00 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=91838eb1-5852-43eb-90b2-09876f360ab2 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:05.835 19:52:00 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:18:05.835 19:52:00 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:05.835 19:52:00 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:05.835 19:52:00 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:05.835 19:52:00 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.835 19:52:00 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.835 19:52:00 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.835 19:52:00 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:18:05.835 19:52:00 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:05.835 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:05.835 19:52:00 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:18:05.835 19:52:00 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:18:05.835 19:52:00 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:18:05.835 19:52:00 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:18:05.835 19:52:00 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:05.835 19:52:00 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:18:05.835 19:52:00 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:05.835 19:52:00 
nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:05.835 Cannot find device "nvmf_init_br" 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@162 -- # true 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:05.835 Cannot find device "nvmf_init_br2" 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@163 -- # true 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:05.835 Cannot find device "nvmf_tgt_br" 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@164 -- # true 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:05.835 Cannot find device "nvmf_tgt_br2" 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@165 -- # true 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:05.835 Cannot find device "nvmf_init_br" 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@166 -- # true 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:05.835 Cannot find device "nvmf_init_br2" 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@167 -- # true 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:05.835 Cannot find device "nvmf_tgt_br" 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@168 -- # true 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:05.835 Cannot find device "nvmf_tgt_br2" 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@169 -- # true 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:05.835 Cannot find device "nvmf_br" 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@170 -- # true 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@171 -- # 
ip link delete nvmf_init_if 00:18:05.835 Cannot find device "nvmf_init_if" 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@171 -- # true 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:05.835 Cannot find device "nvmf_init_if2" 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@172 -- # true 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:05.835 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@173 -- # true 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:05.835 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@174 -- # true 00:18:05.835 19:52:00 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:05.836 19:52:00 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:05.836 19:52:00 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:05.836 19:52:01 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:05.836 19:52:01 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:05.836 19:52:01 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:05.836 19:52:01 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:05.836 19:52:01 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:05.836 19:52:01 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:05.836 19:52:01 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:05.836 19:52:01 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:05.836 19:52:01 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:05.836 19:52:01 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:05.836 19:52:01 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:05.836 19:52:01 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:05.836 19:52:01 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:05.836 19:52:01 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:05.836 19:52:01 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:05.836 19:52:01 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:05.836 19:52:01 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:05.836 19:52:01 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:06.094 19:52:01 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:06.094 19:52:01 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:06.094 19:52:01 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:06.094 19:52:01 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:06.094 19:52:01 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:06.094 19:52:01 nvmf_dif -- 
nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:06.095 19:52:01 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:06.095 19:52:01 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:06.095 19:52:01 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:06.095 19:52:01 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:06.095 19:52:01 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:06.095 19:52:01 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:06.095 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:06.095 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:18:06.095 00:18:06.095 --- 10.0.0.3 ping statistics --- 00:18:06.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:06.095 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:18:06.095 19:52:01 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:06.095 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:06.095 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.032 ms 00:18:06.095 00:18:06.095 --- 10.0.0.4 ping statistics --- 00:18:06.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:06.095 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:18:06.095 19:52:01 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:06.095 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:06.095 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.014 ms 00:18:06.095 00:18:06.095 --- 10.0.0.1 ping statistics --- 00:18:06.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:06.095 rtt min/avg/max/mdev = 0.014/0.014/0.014/0.000 ms 00:18:06.095 19:52:01 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:06.095 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:06.095 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:18:06.095 00:18:06.095 --- 10.0.0.2 ping statistics --- 00:18:06.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:06.095 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:18:06.095 19:52:01 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:06.095 19:52:01 nvmf_dif -- nvmf/common.sh@461 -- # return 0 00:18:06.095 19:52:01 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:18:06.095 19:52:01 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:06.352 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:06.352 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:06.352 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:06.352 19:52:01 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:06.352 19:52:01 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:06.352 19:52:01 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:06.352 19:52:01 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:06.352 19:52:01 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:06.352 19:52:01 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:06.352 19:52:01 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:18:06.352 19:52:01 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:18:06.352 19:52:01 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:06.352 19:52:01 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:06.352 19:52:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:18:06.352 19:52:01 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=81501 00:18:06.352 19:52:01 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 81501 00:18:06.352 19:52:01 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 81501 ']' 00:18:06.352 19:52:01 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:06.352 19:52:01 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:06.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:06.352 19:52:01 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:06.352 19:52:01 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:06.352 19:52:01 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:06.352 19:52:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:18:06.352 [2024-11-26 19:52:01.521354] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:18:06.352 [2024-11-26 19:52:01.521400] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:06.609 [2024-11-26 19:52:01.659102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.609 [2024-11-26 19:52:01.693237] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:18:06.609 [2024-11-26 19:52:01.693273] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:06.609 [2024-11-26 19:52:01.693279] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:06.609 [2024-11-26 19:52:01.693284] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:06.609 [2024-11-26 19:52:01.693288] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:06.609 [2024-11-26 19:52:01.693562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:06.609 [2024-11-26 19:52:01.724046] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:07.223 19:52:02 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:07.223 19:52:02 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:18:07.223 19:52:02 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:07.223 19:52:02 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:07.223 19:52:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:18:07.223 19:52:02 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:07.223 19:52:02 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:18:07.223 19:52:02 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:18:07.223 19:52:02 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.223 19:52:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:18:07.223 [2024-11-26 19:52:02.424473] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:07.223 19:52:02 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.223 19:52:02 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:18:07.223 19:52:02 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:07.223 19:52:02 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:07.223 19:52:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:18:07.223 ************************************ 00:18:07.223 START TEST fio_dif_1_default 00:18:07.223 ************************************ 00:18:07.223 19:52:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:18:07.223 19:52:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:18:07.223 19:52:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:18:07.223 19:52:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:18:07.223 19:52:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:18:07.223 19:52:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:18:07.223 19:52:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:18:07.223 19:52:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.223 19:52:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:18:07.223 bdev_null0 00:18:07.223 19:52:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.223 19:52:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:18:07.223 
19:52:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.223 19:52:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:18:07.223 19:52:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.223 19:52:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:18:07.223 19:52:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.223 19:52:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:18:07.223 19:52:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.223 19:52:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:18:07.223 19:52:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.223 19:52:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:18:07.223 [2024-11-26 19:52:02.464538] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:07.481 19:52:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.481 19:52:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:18:07.481 19:52:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:18:07.481 19:52:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:07.481 19:52:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:07.481 19:52:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:18:07.481 19:52:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:07.481 19:52:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:07.481 19:52:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:18:07.481 19:52:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:07.481 19:52:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:07.481 19:52:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:18:07.481 19:52:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:18:07.481 19:52:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:07.481 19:52:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:18:07.481 19:52:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:07.481 19:52:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:18:07.481 19:52:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:18:07.481 19:52:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:07.481 19:52:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:07.481 { 00:18:07.481 "params": { 00:18:07.481 "name": "Nvme$subsystem", 00:18:07.481 "trtype": "$TEST_TRANSPORT", 00:18:07.481 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:18:07.481 "adrfam": "ipv4", 00:18:07.482 "trsvcid": "$NVMF_PORT", 00:18:07.482 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:07.482 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:07.482 "hdgst": ${hdgst:-false}, 00:18:07.482 "ddgst": ${ddgst:-false} 00:18:07.482 }, 00:18:07.482 "method": "bdev_nvme_attach_controller" 00:18:07.482 } 00:18:07.482 EOF 00:18:07.482 )") 00:18:07.482 19:52:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:07.482 19:52:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:18:07.482 19:52:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:07.482 19:52:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:18:07.482 19:52:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:18:07.482 19:52:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:18:07.482 19:52:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 00:18:07.482 19:52:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:18:07.482 19:52:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:18:07.482 "params": { 00:18:07.482 "name": "Nvme0", 00:18:07.482 "trtype": "tcp", 00:18:07.482 "traddr": "10.0.0.3", 00:18:07.482 "adrfam": "ipv4", 00:18:07.482 "trsvcid": "4420", 00:18:07.482 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:07.482 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:07.482 "hdgst": false, 00:18:07.482 "ddgst": false 00:18:07.482 }, 00:18:07.482 "method": "bdev_nvme_attach_controller" 00:18:07.482 }' 00:18:07.482 19:52:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:18:07.482 19:52:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:18:07.482 19:52:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:07.482 19:52:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:18:07.482 19:52:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:07.482 19:52:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:07.482 19:52:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:18:07.482 19:52:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:18:07.482 19:52:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:07.482 19:52:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:07.482 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:18:07.482 fio-3.35 00:18:07.482 Starting 1 thread 00:18:19.703 00:18:19.703 filename0: (groupid=0, jobs=1): err= 0: pid=81573: Tue Nov 26 19:52:13 2024 00:18:19.703 read: IOPS=11.8k, BW=46.2MiB/s (48.5MB/s)(462MiB/10001msec) 00:18:19.703 slat (usec): min=5, max=619, avg= 7.38, stdev= 2.82 00:18:19.703 clat (usec): min=275, max=2666, avg=317.91, stdev=34.88 00:18:19.703 lat (usec): min=281, max=2699, avg=325.29, stdev=35.69 00:18:19.703 clat percentiles (usec): 00:18:19.703 | 1.00th=[ 281], 5.00th=[ 289], 10.00th=[ 
293], 20.00th=[ 297], 00:18:19.703 | 30.00th=[ 302], 40.00th=[ 306], 50.00th=[ 310], 60.00th=[ 314], 00:18:19.703 | 70.00th=[ 318], 80.00th=[ 326], 90.00th=[ 367], 95.00th=[ 379], 00:18:19.703 | 99.00th=[ 396], 99.50th=[ 445], 99.90th=[ 578], 99.95th=[ 742], 00:18:19.703 | 99.99th=[ 1004] 00:18:19.703 bw ( KiB/s): min=40224, max=51552, per=99.79%, avg=47248.84, stdev=3276.62, samples=19 00:18:19.703 iops : min=10056, max=12888, avg=11812.21, stdev=819.15, samples=19 00:18:19.703 lat (usec) : 500=99.71%, 750=0.24%, 1000=0.04% 00:18:19.703 lat (msec) : 2=0.01%, 4=0.01% 00:18:19.703 cpu : usr=88.53%, sys=10.05%, ctx=76, majf=0, minf=9 00:18:19.703 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:19.703 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:19.703 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:19.703 issued rwts: total=118384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:19.703 latency : target=0, window=0, percentile=100.00%, depth=4 00:18:19.703 00:18:19.703 Run status group 0 (all jobs): 00:18:19.703 READ: bw=46.2MiB/s (48.5MB/s), 46.2MiB/s-46.2MiB/s (48.5MB/s-48.5MB/s), io=462MiB (485MB), run=10001-10001msec 00:18:19.703 19:52:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:18:19.703 19:52:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:18:19.703 19:52:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:18:19.703 19:52:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:18:19.703 19:52:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:18:19.703 19:52:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:19.703 19:52:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.703 19:52:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:18:19.703 19:52:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.703 19:52:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:18:19.703 19:52:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.703 19:52:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:18:19.703 19:52:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.703 00:18:19.703 real 0m10.816s 00:18:19.703 user 0m9.342s 00:18:19.703 sys 0m1.176s 00:18:19.703 19:52:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:19.703 19:52:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:18:19.703 ************************************ 00:18:19.703 END TEST fio_dif_1_default 00:18:19.703 ************************************ 00:18:19.703 19:52:13 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:18:19.703 19:52:13 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:19.703 19:52:13 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:19.703 19:52:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:18:19.703 ************************************ 00:18:19.703 START TEST fio_dif_1_multi_subsystems 00:18:19.703 ************************************ 00:18:19.703 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # 
fio_dif_1_multi_subsystems 00:18:19.703 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:18:19.703 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:18:19.703 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:18:19.703 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:18:19.703 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:18:19.703 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:18:19.703 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:18:19.703 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.703 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:18:19.703 bdev_null0 00:18:19.703 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.703 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:18:19.703 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.703 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:18:19.703 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.703 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:18:19.703 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.703 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:18:19.703 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.703 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:18:19.703 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.703 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:18:19.703 [2024-11-26 19:52:13.319396] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:19.703 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.703 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:18:19.703 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:18:19.703 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:18:19.703 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:18:19.703 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.703 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:18:19.703 bdev_null1 00:18:19.703 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.703 19:52:13 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:18:19.703 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.703 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:18:19.703 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.703 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:18:19.703 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.703 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:18:19.703 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.703 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:19.703 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.703 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:18:19.703 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.703 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:18:19.703 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:19.704 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:19.704 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:19.704 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:19.704 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:19.704 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:19.704 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:18:19.704 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:19.704 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:19.704 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:18:19.704 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:18:19.704 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:18:19.704 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:18:19.704 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:18:19.704 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:18:19.704 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:18:19.704 19:52:13 
nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:19.704 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:19.704 { 00:18:19.704 "params": { 00:18:19.704 "name": "Nvme$subsystem", 00:18:19.704 "trtype": "$TEST_TRANSPORT", 00:18:19.704 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:19.704 "adrfam": "ipv4", 00:18:19.704 "trsvcid": "$NVMF_PORT", 00:18:19.704 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:19.704 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:19.704 "hdgst": ${hdgst:-false}, 00:18:19.704 "ddgst": ${ddgst:-false} 00:18:19.704 }, 00:18:19.704 "method": "bdev_nvme_attach_controller" 00:18:19.704 } 00:18:19.704 EOF 00:18:19.704 )") 00:18:19.704 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:19.704 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:18:19.704 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:19.704 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:18:19.704 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:18:19.704 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:18:19.704 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:18:19.704 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:19.704 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:19.704 { 00:18:19.704 "params": { 00:18:19.704 "name": "Nvme$subsystem", 00:18:19.704 "trtype": "$TEST_TRANSPORT", 00:18:19.704 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:19.704 "adrfam": "ipv4", 00:18:19.704 "trsvcid": "$NVMF_PORT", 00:18:19.704 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:19.704 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:19.704 "hdgst": ${hdgst:-false}, 00:18:19.704 "ddgst": ${ddgst:-false} 00:18:19.704 }, 00:18:19.704 "method": "bdev_nvme_attach_controller" 00:18:19.704 } 00:18:19.704 EOF 00:18:19.704 )") 00:18:19.704 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:18:19.704 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:18:19.704 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:18:19.704 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
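The trace above builds each target out of four RPCs per subsystem: a null bdev with 512-byte blocks plus 16 bytes of metadata and DIF type 1 protection, an NVMe-oF subsystem, a namespace mapping, and a TCP listener on 10.0.0.3:4420. As a minimal sketch only, assuming rpc_cmd is a thin wrapper over SPDK's scripts/rpc.py, the same setup issued directly would look roughly like this; every argument is copied from the rpc_cmd calls traced above, and nothing below is emitted by the test itself:

  # sketch only: assumes rpc_cmd forwards to scripts/rpc.py with identical arguments
  # one pass per subsystem; arguments mirror the traced rpc_cmd calls
  for sub in 0 1; do
      # 64 MB null bdev, 512-byte blocks + 16-byte metadata, protection type 1
      scripts/rpc.py bdev_null_create "bdev_null$sub" 64 512 --md-size 16 --dif-type 1
      scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub" \
          --serial-number "53313233-$sub" --allow-any-host
      scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub" "bdev_null$sub"
      scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub" \
          -t tcp -a 10.0.0.3 -s 4420
  done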
00:18:19.704 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:18:19.704 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:18:19.704 "params": { 00:18:19.704 "name": "Nvme0", 00:18:19.704 "trtype": "tcp", 00:18:19.704 "traddr": "10.0.0.3", 00:18:19.704 "adrfam": "ipv4", 00:18:19.704 "trsvcid": "4420", 00:18:19.704 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:19.704 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:19.704 "hdgst": false, 00:18:19.704 "ddgst": false 00:18:19.704 }, 00:18:19.704 "method": "bdev_nvme_attach_controller" 00:18:19.704 },{ 00:18:19.704 "params": { 00:18:19.704 "name": "Nvme1", 00:18:19.704 "trtype": "tcp", 00:18:19.704 "traddr": "10.0.0.3", 00:18:19.704 "adrfam": "ipv4", 00:18:19.704 "trsvcid": "4420", 00:18:19.704 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:19.704 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:19.704 "hdgst": false, 00:18:19.704 "ddgst": false 00:18:19.704 }, 00:18:19.704 "method": "bdev_nvme_attach_controller" 00:18:19.704 }' 00:18:19.704 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:18:19.704 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:18:19.704 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:19.704 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:19.704 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:18:19.704 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:19.704 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:18:19.704 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:18:19.704 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:19.704 19:52:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:19.704 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:18:19.704 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:18:19.704 fio-3.35 00:18:19.704 Starting 2 threads 00:18:29.712 00:18:29.712 filename0: (groupid=0, jobs=1): err= 0: pid=81733: Tue Nov 26 19:52:24 2024 00:18:29.712 read: IOPS=6943, BW=27.1MiB/s (28.4MB/s)(271MiB/10001msec) 00:18:29.712 slat (usec): min=5, max=506, avg= 8.76, stdev= 5.33 00:18:29.712 clat (usec): min=291, max=4968, avg=553.51, stdev=43.22 00:18:29.712 lat (usec): min=297, max=5001, avg=562.26, stdev=44.19 00:18:29.712 clat percentiles (usec): 00:18:29.712 | 1.00th=[ 502], 5.00th=[ 519], 10.00th=[ 529], 20.00th=[ 537], 00:18:29.712 | 30.00th=[ 537], 40.00th=[ 545], 50.00th=[ 553], 60.00th=[ 553], 00:18:29.712 | 70.00th=[ 562], 80.00th=[ 570], 90.00th=[ 586], 95.00th=[ 603], 00:18:29.712 | 99.00th=[ 635], 99.50th=[ 652], 99.90th=[ 725], 99.95th=[ 791], 00:18:29.712 | 99.99th=[ 955] 00:18:29.712 bw ( KiB/s): min=26816, max=28736, per=50.01%, avg=27784.42, stdev=521.78, samples=19 00:18:29.712 iops : min= 6704, max= 7184, 
avg=6946.11, stdev=130.44, samples=19 00:18:29.712 lat (usec) : 500=0.99%, 750=98.93%, 1000=0.07% 00:18:29.712 lat (msec) : 10=0.01% 00:18:29.712 cpu : usr=90.90%, sys=8.09%, ctx=90, majf=0, minf=0 00:18:29.712 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:29.712 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:29.712 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:29.712 issued rwts: total=69444,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:29.712 latency : target=0, window=0, percentile=100.00%, depth=4 00:18:29.712 filename1: (groupid=0, jobs=1): err= 0: pid=81734: Tue Nov 26 19:52:24 2024 00:18:29.712 read: IOPS=6944, BW=27.1MiB/s (28.4MB/s)(271MiB/10001msec) 00:18:29.712 slat (nsec): min=3023, max=49265, avg=8835.51, stdev=4737.23 00:18:29.712 clat (usec): min=284, max=7498, avg=552.20, stdev=59.99 00:18:29.712 lat (usec): min=290, max=7511, avg=561.03, stdev=60.61 00:18:29.712 clat percentiles (usec): 00:18:29.712 | 1.00th=[ 510], 5.00th=[ 523], 10.00th=[ 529], 20.00th=[ 537], 00:18:29.712 | 30.00th=[ 537], 40.00th=[ 545], 50.00th=[ 545], 60.00th=[ 553], 00:18:29.712 | 70.00th=[ 562], 80.00th=[ 570], 90.00th=[ 586], 95.00th=[ 594], 00:18:29.712 | 99.00th=[ 627], 99.50th=[ 644], 99.90th=[ 685], 99.95th=[ 709], 00:18:29.712 | 99.99th=[ 2278] 00:18:29.712 bw ( KiB/s): min=27072, max=28736, per=50.03%, avg=27792.84, stdev=507.24, samples=19 00:18:29.712 iops : min= 6768, max= 7184, avg=6948.21, stdev=126.81, samples=19 00:18:29.712 lat (usec) : 500=0.28%, 750=99.69%, 1000=0.02% 00:18:29.712 lat (msec) : 4=0.01%, 10=0.01% 00:18:29.712 cpu : usr=91.10%, sys=8.02%, ctx=196, majf=0, minf=0 00:18:29.712 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:29.712 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:29.712 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:29.712 issued rwts: total=69452,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:29.712 latency : target=0, window=0, percentile=100.00%, depth=4 00:18:29.712 00:18:29.712 Run status group 0 (all jobs): 00:18:29.712 READ: bw=54.2MiB/s (56.9MB/s), 27.1MiB/s-27.1MiB/s (28.4MB/s-28.4MB/s), io=543MiB (569MB), run=10001-10001msec 00:18:29.712 19:52:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:18:29.712 19:52:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:18:29.712 19:52:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:18:29.712 19:52:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:18:29.712 19:52:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:18:29.712 19:52:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:29.712 19:52:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.712 19:52:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:18:29.712 19:52:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.712 19:52:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:18:29.712 19:52:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.712 19:52:24 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:18:29.712 19:52:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.712 19:52:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:18:29.712 19:52:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:18:29.712 19:52:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:18:29.712 19:52:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:29.712 19:52:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.712 19:52:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:18:29.712 19:52:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.712 19:52:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:18:29.712 19:52:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.712 19:52:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:18:29.712 19:52:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.712 00:18:29.712 real 0m10.919s 00:18:29.712 user 0m18.791s 00:18:29.712 sys 0m1.790s 00:18:29.712 19:52:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:29.712 ************************************ 00:18:29.712 END TEST fio_dif_1_multi_subsystems 00:18:29.712 ************************************ 00:18:29.712 19:52:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:18:29.712 19:52:24 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:18:29.713 19:52:24 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:29.713 19:52:24 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:29.713 19:52:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:18:29.713 ************************************ 00:18:29.713 START TEST fio_dif_rand_params 00:18:29.713 ************************************ 00:18:29.713 19:52:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:18:29.713 19:52:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:18:29.713 19:52:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:18:29.713 19:52:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:18:29.713 19:52:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:18:29.713 19:52:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:18:29.713 19:52:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:18:29.713 19:52:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:18:29.713 19:52:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:18:29.713 19:52:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:18:29.713 19:52:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:18:29.713 19:52:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:18:29.713 19:52:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:18:29.713 19:52:24 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:18:29.713 19:52:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.713 19:52:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:29.713 bdev_null0 00:18:29.713 19:52:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.713 19:52:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:18:29.713 19:52:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.713 19:52:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:29.713 19:52:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.713 19:52:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:18:29.713 19:52:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.713 19:52:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:29.713 19:52:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.713 19:52:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:18:29.713 19:52:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.713 19:52:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:29.713 [2024-11-26 19:52:24.278171] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:29.713 19:52:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.713 19:52:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:18:29.713 19:52:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:29.713 19:52:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:29.713 19:52:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:29.713 19:52:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:29.713 19:52:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:18:29.713 19:52:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:29.713 19:52:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:29.713 19:52:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:18:29.713 19:52:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:18:29.713 19:52:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:29.713 19:52:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:18:29.713 19:52:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:29.713 19:52:24 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:18:29.713 19:52:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:18:29.713 19:52:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:18:29.713 19:52:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:18:29.713 19:52:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:29.713 19:52:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:29.713 19:52:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:29.713 { 00:18:29.713 "params": { 00:18:29.713 "name": "Nvme$subsystem", 00:18:29.713 "trtype": "$TEST_TRANSPORT", 00:18:29.713 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:29.713 "adrfam": "ipv4", 00:18:29.713 "trsvcid": "$NVMF_PORT", 00:18:29.713 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:29.713 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:29.713 "hdgst": ${hdgst:-false}, 00:18:29.713 "ddgst": ${ddgst:-false} 00:18:29.713 }, 00:18:29.713 "method": "bdev_nvme_attach_controller" 00:18:29.713 } 00:18:29.713 EOF 00:18:29.713 )") 00:18:29.713 19:52:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:18:29.713 19:52:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:29.713 19:52:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:18:29.713 19:52:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:18:29.713 19:52:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:18:29.713 19:52:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
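At this point fio is handed two anonymous files: the JSON attach config on /dev/fd/62 (printed by the printf entry just below) and the job file that gen_fio_conf writes to the positional /dev/fd/61 argument, which the log never echoes. A plausible shape for that job file in this fio_dif_rand_params pass, built only from parameters visible in the surrounding trace (randread, bs=128k, numjobs=3, iodepth=3, runtime=5); the bdev name Nvme0n1 for the controller attached as Nvme0, plus thread and time_based, are assumptions rather than anything the log shows:

  # assumed shape of the job file on /dev/fd/61 (not emitted by the test);
  # thread mode is inferred from fio's "Starting 3 threads" line, and the
  # bdev name Nvme0n1 is an assumption for the controller attached as Nvme0
  [global]
  ioengine=spdk_bdev
  thread=1
  rw=randread
  bs=128k
  iodepth=3
  numjobs=3
  time_based=1
  runtime=5

  [filename0]
  filename=Nvme0n1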
00:18:29.713 19:52:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:18:29.713 19:52:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:18:29.713 "params": { 00:18:29.713 "name": "Nvme0", 00:18:29.713 "trtype": "tcp", 00:18:29.713 "traddr": "10.0.0.3", 00:18:29.713 "adrfam": "ipv4", 00:18:29.713 "trsvcid": "4420", 00:18:29.713 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:29.713 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:29.713 "hdgst": false, 00:18:29.713 "ddgst": false 00:18:29.713 }, 00:18:29.713 "method": "bdev_nvme_attach_controller" 00:18:29.713 }' 00:18:29.713 19:52:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:18:29.713 19:52:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:18:29.713 19:52:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:29.713 19:52:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:29.713 19:52:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:29.713 19:52:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:18:29.713 19:52:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:18:29.713 19:52:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:18:29.713 19:52:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:29.713 19:52:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:29.713 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:18:29.713 ... 
00:18:29.713 fio-3.35 00:18:29.713 Starting 3 threads 00:18:34.971 00:18:34.971 filename0: (groupid=0, jobs=1): err= 0: pid=81889: Tue Nov 26 19:52:29 2024 00:18:34.971 read: IOPS=355, BW=44.4MiB/s (46.6MB/s)(222MiB/5006msec) 00:18:34.971 slat (nsec): min=5762, max=35692, avg=10208.84, stdev=5339.69 00:18:34.971 clat (usec): min=5951, max=9694, avg=8418.00, stdev=179.43 00:18:34.971 lat (usec): min=5957, max=9716, avg=8428.21, stdev=179.27 00:18:34.971 clat percentiles (usec): 00:18:34.971 | 1.00th=[ 8291], 5.00th=[ 8291], 10.00th=[ 8356], 20.00th=[ 8356], 00:18:34.971 | 30.00th=[ 8356], 40.00th=[ 8356], 50.00th=[ 8356], 60.00th=[ 8356], 00:18:34.971 | 70.00th=[ 8455], 80.00th=[ 8455], 90.00th=[ 8586], 95.00th=[ 8586], 00:18:34.971 | 99.00th=[ 8717], 99.50th=[ 8979], 99.90th=[ 9634], 99.95th=[ 9634], 00:18:34.971 | 99.99th=[ 9634] 00:18:34.971 bw ( KiB/s): min=44544, max=46080, per=33.33%, avg=45482.67, stdev=512.00, samples=9 00:18:34.971 iops : min= 348, max= 360, avg=355.33, stdev= 4.00, samples=9 00:18:34.971 lat (msec) : 10=100.00% 00:18:34.971 cpu : usr=92.75%, sys=6.81%, ctx=50, majf=0, minf=0 00:18:34.971 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:34.971 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:34.971 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:34.971 issued rwts: total=1779,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:34.971 latency : target=0, window=0, percentile=100.00%, depth=3 00:18:34.971 filename0: (groupid=0, jobs=1): err= 0: pid=81890: Tue Nov 26 19:52:29 2024 00:18:34.971 read: IOPS=355, BW=44.4MiB/s (46.6MB/s)(222MiB/5006msec) 00:18:34.971 slat (nsec): min=5642, max=35652, avg=10009.57, stdev=5340.07 00:18:34.971 clat (usec): min=5954, max=9710, avg=8418.14, stdev=179.53 00:18:34.971 lat (usec): min=5960, max=9734, avg=8428.15, stdev=179.39 00:18:34.971 clat percentiles (usec): 00:18:34.971 | 1.00th=[ 8291], 5.00th=[ 8291], 10.00th=[ 8356], 20.00th=[ 8356], 00:18:34.971 | 30.00th=[ 8356], 40.00th=[ 8356], 50.00th=[ 8356], 60.00th=[ 8356], 00:18:34.971 | 70.00th=[ 8455], 80.00th=[ 8455], 90.00th=[ 8586], 95.00th=[ 8586], 00:18:34.971 | 99.00th=[ 8717], 99.50th=[ 9110], 99.90th=[ 9765], 99.95th=[ 9765], 00:18:34.971 | 99.99th=[ 9765] 00:18:34.971 bw ( KiB/s): min=44544, max=46080, per=33.33%, avg=45482.67, stdev=512.00, samples=9 00:18:34.971 iops : min= 348, max= 360, avg=355.33, stdev= 4.00, samples=9 00:18:34.971 lat (msec) : 10=100.00% 00:18:34.971 cpu : usr=92.87%, sys=6.71%, ctx=8, majf=0, minf=0 00:18:34.971 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:34.971 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:34.971 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:34.971 issued rwts: total=1779,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:34.971 latency : target=0, window=0, percentile=100.00%, depth=3 00:18:34.971 filename0: (groupid=0, jobs=1): err= 0: pid=81891: Tue Nov 26 19:52:29 2024 00:18:34.971 read: IOPS=355, BW=44.5MiB/s (46.6MB/s)(222MiB/5001msec) 00:18:34.971 slat (nsec): min=5527, max=27643, avg=7313.63, stdev=1561.02 00:18:34.971 clat (usec): min=3127, max=9343, avg=8416.25, stdev=247.74 00:18:34.972 lat (usec): min=3135, max=9354, avg=8423.56, stdev=247.34 00:18:34.972 clat percentiles (usec): 00:18:34.972 | 1.00th=[ 8094], 5.00th=[ 8356], 10.00th=[ 8356], 20.00th=[ 8356], 00:18:34.972 | 30.00th=[ 8356], 40.00th=[ 8356], 50.00th=[ 8356], 60.00th=[ 8356], 
00:18:34.972 | 70.00th=[ 8455], 80.00th=[ 8586], 90.00th=[ 8586], 95.00th=[ 8586], 00:18:34.972 | 99.00th=[ 8717], 99.50th=[ 8848], 99.90th=[ 9372], 99.95th=[ 9372], 00:18:34.972 | 99.99th=[ 9372] 00:18:34.972 bw ( KiB/s): min=44544, max=46080, per=33.39%, avg=45568.00, stdev=543.06, samples=9 00:18:34.972 iops : min= 348, max= 360, avg=356.00, stdev= 4.24, samples=9 00:18:34.972 lat (msec) : 4=0.17%, 10=99.83% 00:18:34.972 cpu : usr=93.00%, sys=6.56%, ctx=20, majf=0, minf=0 00:18:34.972 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:34.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:34.972 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:34.972 issued rwts: total=1779,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:34.972 latency : target=0, window=0, percentile=100.00%, depth=3 00:18:34.972 00:18:34.972 Run status group 0 (all jobs): 00:18:34.972 READ: bw=133MiB/s (140MB/s), 44.4MiB/s-44.5MiB/s (46.6MB/s-46.6MB/s), io=667MiB (700MB), run=5001-5006msec 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:34.972 bdev_null0 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:34.972 [2024-11-26 19:52:30.099827] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:34.972 bdev_null1 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:34.972 bdev_null2 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1343 -- # local sanitizers 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:34.972 19:52:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:34.972 { 00:18:34.972 "params": { 00:18:34.972 "name": "Nvme$subsystem", 00:18:34.972 "trtype": "$TEST_TRANSPORT", 00:18:34.972 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:34.972 "adrfam": "ipv4", 00:18:34.972 "trsvcid": "$NVMF_PORT", 00:18:34.972 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:34.972 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:34.972 "hdgst": ${hdgst:-false}, 00:18:34.972 "ddgst": ${ddgst:-false} 00:18:34.973 }, 00:18:34.973 "method": "bdev_nvme_attach_controller" 00:18:34.973 } 00:18:34.973 EOF 00:18:34.973 )") 00:18:34.973 19:52:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:34.973 19:52:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:18:34.973 19:52:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:18:34.973 19:52:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:18:34.973 19:52:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:18:34.973 19:52:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:34.973 19:52:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:18:34.973 19:52:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:18:34.973 19:52:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:18:34.973 19:52:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:18:34.973 19:52:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:34.973 19:52:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:34.973 { 00:18:34.973 "params": { 00:18:34.973 "name": "Nvme$subsystem", 00:18:34.973 "trtype": "$TEST_TRANSPORT", 00:18:34.973 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:34.973 "adrfam": "ipv4", 00:18:34.973 "trsvcid": "$NVMF_PORT", 00:18:34.973 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:34.973 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:34.973 "hdgst": ${hdgst:-false}, 00:18:34.973 "ddgst": ${ddgst:-false} 00:18:34.973 }, 00:18:34.973 "method": "bdev_nvme_attach_controller" 00:18:34.973 } 00:18:34.973 EOF 00:18:34.973 )") 00:18:34.973 19:52:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:18:34.973 19:52:30 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:18:34.973 19:52:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:18:34.973 19:52:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:34.973 19:52:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:34.973 { 00:18:34.973 "params": { 00:18:34.973 "name": "Nvme$subsystem", 00:18:34.973 "trtype": "$TEST_TRANSPORT", 00:18:34.973 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:34.973 "adrfam": "ipv4", 00:18:34.973 "trsvcid": "$NVMF_PORT", 00:18:34.973 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:34.973 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:34.973 "hdgst": ${hdgst:-false}, 00:18:34.973 "ddgst": ${ddgst:-false} 00:18:34.973 }, 00:18:34.973 "method": "bdev_nvme_attach_controller" 00:18:34.973 } 00:18:34.973 EOF 00:18:34.973 )") 00:18:34.973 19:52:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:18:34.973 19:52:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:18:34.973 19:52:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:18:34.973 19:52:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:18:34.973 "params": { 00:18:34.973 "name": "Nvme0", 00:18:34.973 "trtype": "tcp", 00:18:34.973 "traddr": "10.0.0.3", 00:18:34.973 "adrfam": "ipv4", 00:18:34.973 "trsvcid": "4420", 00:18:34.973 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:34.973 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:34.973 "hdgst": false, 00:18:34.973 "ddgst": false 00:18:34.973 }, 00:18:34.973 "method": "bdev_nvme_attach_controller" 00:18:34.973 },{ 00:18:34.973 "params": { 00:18:34.973 "name": "Nvme1", 00:18:34.973 "trtype": "tcp", 00:18:34.973 "traddr": "10.0.0.3", 00:18:34.973 "adrfam": "ipv4", 00:18:34.973 "trsvcid": "4420", 00:18:34.973 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:34.973 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:34.973 "hdgst": false, 00:18:34.973 "ddgst": false 00:18:34.973 }, 00:18:34.973 "method": "bdev_nvme_attach_controller" 00:18:34.973 },{ 00:18:34.973 "params": { 00:18:34.973 "name": "Nvme2", 00:18:34.973 "trtype": "tcp", 00:18:34.973 "traddr": "10.0.0.3", 00:18:34.973 "adrfam": "ipv4", 00:18:34.973 "trsvcid": "4420", 00:18:34.973 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:34.973 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:34.973 "hdgst": false, 00:18:34.973 "ddgst": false 00:18:34.973 }, 00:18:34.973 "method": "bdev_nvme_attach_controller" 00:18:34.973 }' 00:18:34.973 19:52:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:18:34.973 19:52:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:18:34.973 19:52:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:34.973 19:52:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:18:34.973 19:52:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:34.973 19:52:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:35.230 19:52:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:18:35.230 19:52:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:18:35.230 19:52:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:35.230 19:52:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:35.230 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:18:35.230 ... 00:18:35.230 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:18:35.230 ... 00:18:35.230 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:18:35.230 ... 00:18:35.230 fio-3.35 00:18:35.230 Starting 24 threads 00:18:47.427 00:18:47.427 filename0: (groupid=0, jobs=1): err= 0: pid=81990: Tue Nov 26 19:52:41 2024 00:18:47.427 read: IOPS=247, BW=990KiB/s (1013kB/s)(9908KiB/10011msec) 00:18:47.427 slat (usec): min=2, max=8012, avg=17.05, stdev=193.15 00:18:47.427 clat (usec): min=13613, max=98436, avg=64569.73, stdev=16425.23 00:18:47.427 lat (usec): min=13620, max=98455, avg=64586.78, stdev=16432.99 00:18:47.427 clat percentiles (usec): 00:18:47.427 | 1.00th=[21890], 5.00th=[35914], 10.00th=[47449], 20.00th=[47973], 00:18:47.427 | 30.00th=[53740], 40.00th=[60031], 50.00th=[68682], 60.00th=[71828], 00:18:47.427 | 70.00th=[73925], 80.00th=[81265], 90.00th=[84411], 95.00th=[86508], 00:18:47.427 | 99.00th=[93848], 99.50th=[94897], 99.90th=[98042], 99.95th=[98042], 00:18:47.427 | 99.99th=[98042] 00:18:47.427 bw ( KiB/s): min= 856, max= 1328, per=4.29%, avg=981.05, stdev=118.96, samples=19 00:18:47.427 iops : min= 214, max= 332, avg=245.26, stdev=29.74, samples=19 00:18:47.427 lat (msec) : 20=0.40%, 50=24.75%, 100=74.85% 00:18:47.427 cpu : usr=38.10%, sys=1.25%, ctx=1031, majf=0, minf=9 00:18:47.427 IO depths : 1=0.1%, 2=1.0%, 4=3.5%, 8=80.0%, 16=15.4%, 32=0.0%, >=64=0.0% 00:18:47.427 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.427 complete : 0=0.0%, 4=87.9%, 8=11.4%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.427 issued rwts: total=2477,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.427 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:47.427 filename0: (groupid=0, jobs=1): err= 0: pid=81991: Tue Nov 26 19:52:41 2024 00:18:47.427 read: IOPS=244, BW=976KiB/s (1000kB/s)(9796KiB/10032msec) 00:18:47.427 slat (usec): min=3, max=8018, avg=23.31, stdev=280.47 00:18:47.427 clat (msec): min=8, max=120, avg=65.36, stdev=19.00 00:18:47.427 lat (msec): min=8, max=120, avg=65.38, stdev=18.99 00:18:47.427 clat percentiles (msec): 00:18:47.427 | 1.00th=[ 16], 5.00th=[ 26], 10.00th=[ 40], 20.00th=[ 49], 00:18:47.427 | 30.00th=[ 57], 40.00th=[ 65], 50.00th=[ 72], 60.00th=[ 72], 00:18:47.427 | 70.00th=[ 78], 80.00th=[ 83], 90.00th=[ 86], 95.00th=[ 90], 00:18:47.427 | 99.00th=[ 99], 99.50th=[ 99], 99.90th=[ 118], 99.95th=[ 121], 00:18:47.427 | 99.99th=[ 121] 00:18:47.427 bw ( KiB/s): min= 824, max= 1924, per=4.25%, avg=972.20, stdev=247.43, samples=20 00:18:47.427 iops : min= 206, max= 481, avg=243.05, stdev=61.86, samples=20 00:18:47.427 lat (msec) : 10=0.12%, 20=3.39%, 50=19.15%, 100=76.97%, 250=0.37% 00:18:47.427 cpu : usr=39.15%, sys=1.27%, ctx=1122, majf=0, minf=9 00:18:47.427 IO depths : 1=0.1%, 2=0.2%, 4=0.9%, 8=82.0%, 16=16.7%, 32=0.0%, >=64=0.0% 00:18:47.427 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.427 complete : 0=0.0%, 4=87.9%, 8=11.9%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.427 issued rwts: total=2449,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:18:47.427 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:47.427 filename0: (groupid=0, jobs=1): err= 0: pid=81992: Tue Nov 26 19:52:41 2024 00:18:47.427 read: IOPS=242, BW=969KiB/s (992kB/s)(9724KiB/10034msec) 00:18:47.427 slat (usec): min=4, max=8014, avg=20.37, stdev=215.14 00:18:47.427 clat (msec): min=11, max=119, avg=65.88, stdev=18.32 00:18:47.427 lat (msec): min=11, max=119, avg=65.90, stdev=18.32 00:18:47.427 clat percentiles (msec): 00:18:47.427 | 1.00th=[ 17], 5.00th=[ 27], 10.00th=[ 45], 20.00th=[ 50], 00:18:47.427 | 30.00th=[ 58], 40.00th=[ 64], 50.00th=[ 71], 60.00th=[ 73], 00:18:47.427 | 70.00th=[ 79], 80.00th=[ 83], 90.00th=[ 85], 95.00th=[ 89], 00:18:47.427 | 99.00th=[ 96], 99.50th=[ 96], 99.90th=[ 111], 99.95th=[ 115], 00:18:47.427 | 99.99th=[ 121] 00:18:47.427 bw ( KiB/s): min= 816, max= 1891, per=4.22%, avg=966.15, stdev=231.22, samples=20 00:18:47.427 iops : min= 204, max= 472, avg=241.50, stdev=57.65, samples=20 00:18:47.427 lat (msec) : 20=3.74%, 50=17.77%, 100=78.07%, 250=0.41% 00:18:47.427 cpu : usr=40.29%, sys=1.40%, ctx=1147, majf=0, minf=9 00:18:47.427 IO depths : 1=0.1%, 2=1.1%, 4=4.5%, 8=78.3%, 16=16.0%, 32=0.0%, >=64=0.0% 00:18:47.427 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.427 complete : 0=0.0%, 4=88.8%, 8=10.2%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.427 issued rwts: total=2431,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.427 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:47.427 filename0: (groupid=0, jobs=1): err= 0: pid=81993: Tue Nov 26 19:52:41 2024 00:18:47.427 read: IOPS=239, BW=957KiB/s (980kB/s)(9580KiB/10013msec) 00:18:47.427 slat (usec): min=3, max=4028, avg=17.68, stdev=150.17 00:18:47.427 clat (msec): min=13, max=108, avg=66.75, stdev=16.41 00:18:47.427 lat (msec): min=13, max=108, avg=66.76, stdev=16.41 00:18:47.427 clat percentiles (msec): 00:18:47.427 | 1.00th=[ 24], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 50], 00:18:47.427 | 30.00th=[ 56], 40.00th=[ 63], 50.00th=[ 71], 60.00th=[ 72], 00:18:47.427 | 70.00th=[ 80], 80.00th=[ 83], 90.00th=[ 87], 95.00th=[ 90], 00:18:47.428 | 99.00th=[ 100], 99.50th=[ 100], 99.90th=[ 102], 99.95th=[ 109], 00:18:47.428 | 99.99th=[ 109] 00:18:47.428 bw ( KiB/s): min= 768, max= 1392, per=4.15%, avg=949.05, stdev=122.35, samples=19 00:18:47.428 iops : min= 192, max= 348, avg=237.26, stdev=30.59, samples=19 00:18:47.428 lat (msec) : 20=0.33%, 50=20.38%, 100=79.16%, 250=0.13% 00:18:47.428 cpu : usr=43.66%, sys=1.44%, ctx=1364, majf=0, minf=9 00:18:47.428 IO depths : 1=0.1%, 2=1.8%, 4=7.1%, 8=76.0%, 16=15.1%, 32=0.0%, >=64=0.0% 00:18:47.428 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.428 complete : 0=0.0%, 4=89.1%, 8=9.4%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.428 issued rwts: total=2395,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.428 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:47.428 filename0: (groupid=0, jobs=1): err= 0: pid=81994: Tue Nov 26 19:52:41 2024 00:18:47.428 read: IOPS=236, BW=945KiB/s (968kB/s)(9480KiB/10032msec) 00:18:47.428 slat (nsec): min=2837, max=48072, avg=9178.40, stdev=5990.97 00:18:47.428 clat (msec): min=11, max=111, avg=67.63, stdev=17.48 00:18:47.428 lat (msec): min=11, max=111, avg=67.64, stdev=17.48 00:18:47.428 clat percentiles (msec): 00:18:47.428 | 1.00th=[ 23], 5.00th=[ 40], 10.00th=[ 48], 20.00th=[ 50], 00:18:47.428 | 30.00th=[ 59], 40.00th=[ 63], 50.00th=[ 72], 60.00th=[ 72], 00:18:47.428 | 70.00th=[ 81], 
80.00th=[ 85], 90.00th=[ 87], 95.00th=[ 94], 00:18:47.428 | 99.00th=[ 101], 99.50th=[ 108], 99.90th=[ 108], 99.95th=[ 108], 00:18:47.428 | 99.99th=[ 112] 00:18:47.428 bw ( KiB/s): min= 864, max= 1648, per=4.11%, avg=941.20, stdev=169.23, samples=20 00:18:47.428 iops : min= 216, max= 412, avg=235.30, stdev=42.31, samples=20 00:18:47.428 lat (msec) : 20=0.76%, 50=20.72%, 100=77.64%, 250=0.89% 00:18:47.428 cpu : usr=35.62%, sys=1.09%, ctx=946, majf=0, minf=9 00:18:47.428 IO depths : 1=0.1%, 2=1.6%, 4=6.0%, 8=76.8%, 16=15.6%, 32=0.0%, >=64=0.0% 00:18:47.428 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.428 complete : 0=0.0%, 4=89.1%, 8=9.6%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.428 issued rwts: total=2370,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.428 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:47.428 filename0: (groupid=0, jobs=1): err= 0: pid=81995: Tue Nov 26 19:52:41 2024 00:18:47.428 read: IOPS=244, BW=978KiB/s (1001kB/s)(9796KiB/10020msec) 00:18:47.428 slat (usec): min=3, max=8028, avg=25.88, stdev=308.99 00:18:47.428 clat (usec): min=24948, max=98943, avg=65292.32, stdev=15961.15 00:18:47.428 lat (usec): min=24958, max=98950, avg=65318.20, stdev=15949.19 00:18:47.428 clat percentiles (usec): 00:18:47.428 | 1.00th=[31589], 5.00th=[37487], 10.00th=[46400], 20.00th=[49021], 00:18:47.428 | 30.00th=[55313], 40.00th=[60031], 50.00th=[68682], 60.00th=[71828], 00:18:47.428 | 70.00th=[74974], 80.00th=[81265], 90.00th=[84411], 95.00th=[87557], 00:18:47.428 | 99.00th=[94897], 99.50th=[95945], 99.90th=[95945], 99.95th=[95945], 00:18:47.428 | 99.99th=[99091] 00:18:47.428 bw ( KiB/s): min= 840, max= 1392, per=4.27%, avg=976.00, stdev=124.88, samples=19 00:18:47.428 iops : min= 210, max= 348, avg=244.00, stdev=31.22, samples=19 00:18:47.428 lat (msec) : 50=22.29%, 100=77.71% 00:18:47.428 cpu : usr=38.20%, sys=1.10%, ctx=1183, majf=0, minf=9 00:18:47.428 IO depths : 1=0.2%, 2=1.3%, 4=4.5%, 8=78.7%, 16=15.3%, 32=0.0%, >=64=0.0% 00:18:47.428 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.428 complete : 0=0.0%, 4=88.3%, 8=10.7%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.428 issued rwts: total=2449,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.428 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:47.428 filename0: (groupid=0, jobs=1): err= 0: pid=81996: Tue Nov 26 19:52:41 2024 00:18:47.428 read: IOPS=229, BW=919KiB/s (941kB/s)(9220KiB/10036msec) 00:18:47.428 slat (usec): min=5, max=3333, avg=11.97, stdev=69.56 00:18:47.428 clat (msec): min=7, max=119, avg=69.52, stdev=19.91 00:18:47.428 lat (msec): min=7, max=119, avg=69.53, stdev=19.91 00:18:47.428 clat percentiles (msec): 00:18:47.428 | 1.00th=[ 10], 5.00th=[ 21], 10.00th=[ 46], 20.00th=[ 56], 00:18:47.428 | 30.00th=[ 67], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 78], 00:18:47.428 | 70.00th=[ 84], 80.00th=[ 85], 90.00th=[ 89], 95.00th=[ 96], 00:18:47.428 | 99.00th=[ 102], 99.50th=[ 108], 99.90th=[ 120], 99.95th=[ 121], 00:18:47.428 | 99.99th=[ 121] 00:18:47.428 bw ( KiB/s): min= 784, max= 2160, per=4.00%, avg=915.60, stdev=298.06, samples=20 00:18:47.428 iops : min= 196, max= 540, avg=228.90, stdev=74.52, samples=20 00:18:47.428 lat (msec) : 10=2.08%, 20=2.69%, 50=11.15%, 100=82.56%, 250=1.52% 00:18:47.428 cpu : usr=34.68%, sys=0.93%, ctx=1518, majf=0, minf=9 00:18:47.428 IO depths : 1=0.1%, 2=1.8%, 4=7.2%, 8=74.5%, 16=16.5%, 32=0.0%, >=64=0.0% 00:18:47.428 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:18:47.428 complete : 0=0.0%, 4=90.2%, 8=8.3%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.428 issued rwts: total=2305,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.428 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:47.428 filename0: (groupid=0, jobs=1): err= 0: pid=81997: Tue Nov 26 19:52:41 2024 00:18:47.428 read: IOPS=225, BW=901KiB/s (922kB/s)(9032KiB/10026msec) 00:18:47.428 slat (usec): min=2, max=8016, avg=20.00, stdev=219.27 00:18:47.428 clat (msec): min=18, max=122, avg=70.89, stdev=18.52 00:18:47.428 lat (msec): min=18, max=122, avg=70.91, stdev=18.53 00:18:47.428 clat percentiles (msec): 00:18:47.428 | 1.00th=[ 22], 5.00th=[ 42], 10.00th=[ 48], 20.00th=[ 54], 00:18:47.428 | 30.00th=[ 61], 40.00th=[ 71], 50.00th=[ 74], 60.00th=[ 79], 00:18:47.428 | 70.00th=[ 82], 80.00th=[ 85], 90.00th=[ 89], 95.00th=[ 100], 00:18:47.428 | 99.00th=[ 122], 99.50th=[ 124], 99.90th=[ 124], 99.95th=[ 124], 00:18:47.428 | 99.99th=[ 124] 00:18:47.428 bw ( KiB/s): min= 764, max= 1520, per=3.92%, avg=896.60, stdev=174.91, samples=20 00:18:47.428 iops : min= 191, max= 380, avg=224.15, stdev=43.73, samples=20 00:18:47.428 lat (msec) : 20=0.80%, 50=16.47%, 100=78.21%, 250=4.52% 00:18:47.428 cpu : usr=43.78%, sys=1.36%, ctx=1365, majf=0, minf=9 00:18:47.428 IO depths : 1=0.1%, 2=3.8%, 4=15.0%, 8=67.3%, 16=13.8%, 32=0.0%, >=64=0.0% 00:18:47.428 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.428 complete : 0=0.0%, 4=91.3%, 8=5.4%, 16=3.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.428 issued rwts: total=2258,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.428 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:47.428 filename1: (groupid=0, jobs=1): err= 0: pid=81998: Tue Nov 26 19:52:41 2024 00:18:47.428 read: IOPS=243, BW=973KiB/s (996kB/s)(9760KiB/10035msec) 00:18:47.428 slat (usec): min=5, max=4014, avg=13.05, stdev=121.85 00:18:47.428 clat (msec): min=7, max=109, avg=65.68, stdev=19.76 00:18:47.428 lat (msec): min=7, max=109, avg=65.69, stdev=19.76 00:18:47.428 clat percentiles (msec): 00:18:47.428 | 1.00th=[ 9], 5.00th=[ 20], 10.00th=[ 45], 20.00th=[ 50], 00:18:47.428 | 30.00th=[ 57], 40.00th=[ 63], 50.00th=[ 71], 60.00th=[ 73], 00:18:47.428 | 70.00th=[ 79], 80.00th=[ 83], 90.00th=[ 87], 95.00th=[ 92], 00:18:47.428 | 99.00th=[ 104], 99.50th=[ 109], 99.90th=[ 109], 99.95th=[ 109], 00:18:47.428 | 99.99th=[ 110] 00:18:47.428 bw ( KiB/s): min= 808, max= 2160, per=4.24%, avg=969.60, stdev=283.97, samples=20 00:18:47.428 iops : min= 202, max= 540, avg=242.40, stdev=70.99, samples=20 00:18:47.428 lat (msec) : 10=1.52%, 20=3.65%, 50=15.41%, 100=78.36%, 250=1.07% 00:18:47.428 cpu : usr=44.36%, sys=1.58%, ctx=1440, majf=0, minf=9 00:18:47.428 IO depths : 1=0.1%, 2=1.8%, 4=7.3%, 8=75.5%, 16=15.3%, 32=0.0%, >=64=0.0% 00:18:47.428 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.428 complete : 0=0.0%, 4=89.3%, 8=9.1%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.428 issued rwts: total=2440,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.428 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:47.428 filename1: (groupid=0, jobs=1): err= 0: pid=81999: Tue Nov 26 19:52:41 2024 00:18:47.428 read: IOPS=235, BW=940KiB/s (963kB/s)(9420KiB/10016msec) 00:18:47.428 slat (usec): min=2, max=4020, avg=11.71, stdev=82.91 00:18:47.428 clat (msec): min=16, max=108, avg=67.98, stdev=15.92 00:18:47.428 lat (msec): min=16, max=108, avg=67.99, stdev=15.92 00:18:47.428 clat percentiles (msec): 00:18:47.428 | 1.00th=[ 29], 5.00th=[ 44], 
10.00th=[ 48], 20.00th=[ 51], 00:18:47.428 | 30.00th=[ 59], 40.00th=[ 64], 50.00th=[ 72], 60.00th=[ 72], 00:18:47.428 | 70.00th=[ 81], 80.00th=[ 84], 90.00th=[ 86], 95.00th=[ 89], 00:18:47.428 | 99.00th=[ 99], 99.50th=[ 108], 99.90th=[ 108], 99.95th=[ 108], 00:18:47.428 | 99.99th=[ 109] 00:18:47.428 bw ( KiB/s): min= 816, max= 1264, per=4.09%, avg=935.16, stdev=104.41, samples=19 00:18:47.428 iops : min= 204, max= 316, avg=233.79, stdev=26.10, samples=19 00:18:47.428 lat (msec) : 20=0.13%, 50=18.34%, 100=80.72%, 250=0.81% 00:18:47.428 cpu : usr=37.09%, sys=1.66%, ctx=1003, majf=0, minf=9 00:18:47.428 IO depths : 1=0.3%, 2=1.4%, 4=4.9%, 8=77.8%, 16=15.6%, 32=0.0%, >=64=0.0% 00:18:47.428 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.428 complete : 0=0.0%, 4=88.8%, 8=10.2%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.428 issued rwts: total=2355,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.428 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:47.428 filename1: (groupid=0, jobs=1): err= 0: pid=82000: Tue Nov 26 19:52:41 2024 00:18:47.428 read: IOPS=213, BW=853KiB/s (874kB/s)(8532KiB/10001msec) 00:18:47.428 slat (nsec): min=2925, max=53546, avg=10553.08, stdev=6946.98 00:18:47.428 clat (msec): min=2, max=122, avg=74.94, stdev=17.64 00:18:47.428 lat (msec): min=2, max=122, avg=74.95, stdev=17.64 00:18:47.428 clat percentiles (msec): 00:18:47.428 | 1.00th=[ 11], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 67], 00:18:47.428 | 30.00th=[ 72], 40.00th=[ 75], 50.00th=[ 78], 60.00th=[ 81], 00:18:47.428 | 70.00th=[ 84], 80.00th=[ 86], 90.00th=[ 92], 95.00th=[ 105], 00:18:47.428 | 99.00th=[ 116], 99.50th=[ 117], 99.90th=[ 120], 99.95th=[ 124], 00:18:47.428 | 99.99th=[ 124] 00:18:47.428 bw ( KiB/s): min= 656, max= 1154, per=3.62%, avg=829.58, stdev=132.56, samples=19 00:18:47.428 iops : min= 164, max= 288, avg=207.37, stdev=33.07, samples=19 00:18:47.428 lat (msec) : 4=0.28%, 10=0.61%, 20=0.61%, 50=9.61%, 100=82.61% 00:18:47.428 lat (msec) : 250=6.28% 00:18:47.428 cpu : usr=44.05%, sys=1.66%, ctx=1398, majf=0, minf=9 00:18:47.428 IO depths : 1=0.1%, 2=5.8%, 4=22.8%, 8=58.5%, 16=12.7%, 32=0.0%, >=64=0.0% 00:18:47.428 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.428 complete : 0=0.0%, 4=93.8%, 8=1.2%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.428 issued rwts: total=2133,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.428 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:47.428 filename1: (groupid=0, jobs=1): err= 0: pid=82001: Tue Nov 26 19:52:41 2024 00:18:47.428 read: IOPS=235, BW=943KiB/s (965kB/s)(9452KiB/10027msec) 00:18:47.429 slat (usec): min=2, max=8027, avg=17.54, stdev=247.18 00:18:47.429 clat (msec): min=8, max=120, avg=67.76, stdev=18.32 00:18:47.429 lat (msec): min=8, max=120, avg=67.78, stdev=18.32 00:18:47.429 clat percentiles (msec): 00:18:47.429 | 1.00th=[ 11], 5.00th=[ 26], 10.00th=[ 48], 20.00th=[ 50], 00:18:47.429 | 30.00th=[ 61], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 73], 00:18:47.429 | 70.00th=[ 81], 80.00th=[ 84], 90.00th=[ 86], 95.00th=[ 93], 00:18:47.429 | 99.00th=[ 96], 99.50th=[ 100], 99.90th=[ 109], 99.95th=[ 112], 00:18:47.429 | 99.99th=[ 121] 00:18:47.429 bw ( KiB/s): min= 784, max= 1968, per=4.11%, avg=940.00, stdev=248.79, samples=20 00:18:47.429 iops : min= 196, max= 492, avg=235.00, stdev=62.20, samples=20 00:18:47.429 lat (msec) : 10=0.80%, 20=1.40%, 50=17.94%, 100=79.52%, 250=0.34% 00:18:47.429 cpu : usr=35.05%, sys=1.19%, ctx=986, majf=0, minf=9 00:18:47.429 IO depths : 
1=0.1%, 2=2.2%, 4=9.0%, 8=73.4%, 16=15.3%, 32=0.0%, >=64=0.0% 00:18:47.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.429 complete : 0=0.0%, 4=90.1%, 8=8.0%, 16=2.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.429 issued rwts: total=2363,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.429 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:47.429 filename1: (groupid=0, jobs=1): err= 0: pid=82002: Tue Nov 26 19:52:41 2024 00:18:47.429 read: IOPS=242, BW=970KiB/s (994kB/s)(9708KiB/10005msec) 00:18:47.429 slat (usec): min=3, max=4026, avg=14.00, stdev=81.98 00:18:47.429 clat (msec): min=4, max=107, avg=65.88, stdev=16.59 00:18:47.429 lat (msec): min=4, max=107, avg=65.89, stdev=16.59 00:18:47.429 clat percentiles (msec): 00:18:47.429 | 1.00th=[ 17], 5.00th=[ 42], 10.00th=[ 48], 20.00th=[ 48], 00:18:47.429 | 30.00th=[ 56], 40.00th=[ 61], 50.00th=[ 70], 60.00th=[ 72], 00:18:47.429 | 70.00th=[ 78], 80.00th=[ 82], 90.00th=[ 86], 95.00th=[ 88], 00:18:47.429 | 99.00th=[ 96], 99.50th=[ 103], 99.90th=[ 105], 99.95th=[ 108], 00:18:47.429 | 99.99th=[ 108] 00:18:47.429 bw ( KiB/s): min= 784, max= 1266, per=4.18%, avg=956.32, stdev=106.20, samples=19 00:18:47.429 iops : min= 196, max= 316, avg=239.05, stdev=26.47, samples=19 00:18:47.429 lat (msec) : 10=0.62%, 20=0.49%, 50=21.51%, 100=76.68%, 250=0.70% 00:18:47.429 cpu : usr=41.93%, sys=1.36%, ctx=1302, majf=0, minf=9 00:18:47.429 IO depths : 1=0.1%, 2=1.9%, 4=7.4%, 8=75.7%, 16=14.8%, 32=0.0%, >=64=0.0% 00:18:47.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.429 complete : 0=0.0%, 4=89.0%, 8=9.4%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.429 issued rwts: total=2427,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.429 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:47.429 filename1: (groupid=0, jobs=1): err= 0: pid=82003: Tue Nov 26 19:52:41 2024 00:18:47.429 read: IOPS=228, BW=916KiB/s (938kB/s)(9192KiB/10037msec) 00:18:47.429 slat (usec): min=5, max=12016, avg=21.58, stdev=283.03 00:18:47.429 clat (msec): min=8, max=123, avg=69.66, stdev=20.00 00:18:47.429 lat (msec): min=8, max=123, avg=69.68, stdev=19.98 00:18:47.429 clat percentiles (msec): 00:18:47.429 | 1.00th=[ 10], 5.00th=[ 28], 10.00th=[ 48], 20.00th=[ 52], 00:18:47.429 | 30.00th=[ 64], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 79], 00:18:47.429 | 70.00th=[ 82], 80.00th=[ 85], 90.00th=[ 89], 95.00th=[ 96], 00:18:47.429 | 99.00th=[ 107], 99.50th=[ 110], 99.90th=[ 121], 99.95th=[ 124], 00:18:47.429 | 99.99th=[ 124] 00:18:47.429 bw ( KiB/s): min= 760, max= 2000, per=3.99%, avg=912.80, stdev=270.89, samples=20 00:18:47.429 iops : min= 190, max= 500, avg=228.20, stdev=67.72, samples=20 00:18:47.429 lat (msec) : 10=1.31%, 20=3.22%, 50=14.10%, 100=77.50%, 250=3.87% 00:18:47.429 cpu : usr=41.45%, sys=1.30%, ctx=1153, majf=0, minf=9 00:18:47.429 IO depths : 1=0.2%, 2=3.6%, 4=13.5%, 8=68.5%, 16=14.2%, 32=0.0%, >=64=0.0% 00:18:47.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.429 complete : 0=0.0%, 4=91.2%, 8=5.9%, 16=3.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.429 issued rwts: total=2298,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.429 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:47.429 filename1: (groupid=0, jobs=1): err= 0: pid=82004: Tue Nov 26 19:52:41 2024 00:18:47.429 read: IOPS=249, BW=999KiB/s (1022kB/s)(9.77MiB/10023msec) 00:18:47.429 slat (usec): min=3, max=4022, avg=17.66, stdev=131.80 00:18:47.429 clat (msec): min=13, max=103, avg=63.97, 
stdev=17.75 00:18:47.429 lat (msec): min=13, max=103, avg=63.99, stdev=17.75 00:18:47.429 clat percentiles (msec): 00:18:47.429 | 1.00th=[ 16], 5.00th=[ 33], 10.00th=[ 44], 20.00th=[ 50], 00:18:47.429 | 30.00th=[ 55], 40.00th=[ 59], 50.00th=[ 67], 60.00th=[ 72], 00:18:47.429 | 70.00th=[ 75], 80.00th=[ 82], 90.00th=[ 85], 95.00th=[ 88], 00:18:47.429 | 99.00th=[ 94], 99.50th=[ 96], 99.90th=[ 99], 99.95th=[ 99], 00:18:47.429 | 99.99th=[ 104] 00:18:47.429 bw ( KiB/s): min= 896, max= 1761, per=4.35%, avg=995.65, stdev=195.31, samples=20 00:18:47.429 iops : min= 224, max= 440, avg=248.90, stdev=48.78, samples=20 00:18:47.429 lat (msec) : 20=2.64%, 50=19.54%, 100=77.78%, 250=0.04% 00:18:47.429 cpu : usr=43.87%, sys=1.38%, ctx=1462, majf=0, minf=9 00:18:47.429 IO depths : 1=0.1%, 2=1.2%, 4=4.6%, 8=78.8%, 16=15.4%, 32=0.0%, >=64=0.0% 00:18:47.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.429 complete : 0=0.0%, 4=88.3%, 8=10.7%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.429 issued rwts: total=2502,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.429 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:47.429 filename1: (groupid=0, jobs=1): err= 0: pid=82005: Tue Nov 26 19:52:41 2024 00:18:47.429 read: IOPS=243, BW=975KiB/s (999kB/s)(9760KiB/10008msec) 00:18:47.429 slat (usec): min=3, max=4016, avg=14.65, stdev=91.90 00:18:47.429 clat (msec): min=13, max=128, avg=65.54, stdev=16.83 00:18:47.429 lat (msec): min=13, max=128, avg=65.56, stdev=16.83 00:18:47.429 clat percentiles (msec): 00:18:47.429 | 1.00th=[ 24], 5.00th=[ 39], 10.00th=[ 46], 20.00th=[ 50], 00:18:47.429 | 30.00th=[ 56], 40.00th=[ 61], 50.00th=[ 69], 60.00th=[ 72], 00:18:47.429 | 70.00th=[ 77], 80.00th=[ 83], 90.00th=[ 85], 95.00th=[ 88], 00:18:47.429 | 99.00th=[ 96], 99.50th=[ 105], 99.90th=[ 109], 99.95th=[ 129], 00:18:47.429 | 99.99th=[ 129] 00:18:47.429 bw ( KiB/s): min= 840, max= 1296, per=4.24%, avg=970.11, stdev=121.15, samples=19 00:18:47.429 iops : min= 210, max= 324, avg=242.53, stdev=30.29, samples=19 00:18:47.429 lat (msec) : 20=0.25%, 50=20.37%, 100=78.69%, 250=0.70% 00:18:47.429 cpu : usr=40.48%, sys=1.07%, ctx=1146, majf=0, minf=9 00:18:47.429 IO depths : 1=0.2%, 2=1.2%, 4=4.4%, 8=78.9%, 16=15.4%, 32=0.0%, >=64=0.0% 00:18:47.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.429 complete : 0=0.0%, 4=88.2%, 8=10.8%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.429 issued rwts: total=2440,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.429 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:47.429 filename2: (groupid=0, jobs=1): err= 0: pid=82006: Tue Nov 26 19:52:41 2024 00:18:47.429 read: IOPS=257, BW=1032KiB/s (1057kB/s)(10.1MiB/10068msec) 00:18:47.429 slat (usec): min=3, max=11029, avg=19.54, stdev=309.95 00:18:47.429 clat (usec): min=1141, max=119974, avg=61803.84, stdev=24809.33 00:18:47.429 lat (usec): min=1148, max=119980, avg=61823.38, stdev=24808.83 00:18:47.429 clat percentiles (msec): 00:18:47.429 | 1.00th=[ 3], 5.00th=[ 5], 10.00th=[ 21], 20.00th=[ 47], 00:18:47.429 | 30.00th=[ 53], 40.00th=[ 61], 50.00th=[ 71], 60.00th=[ 73], 00:18:47.429 | 70.00th=[ 78], 80.00th=[ 83], 90.00th=[ 87], 95.00th=[ 91], 00:18:47.429 | 99.00th=[ 101], 99.50th=[ 106], 99.90th=[ 116], 99.95th=[ 117], 00:18:47.429 | 99.99th=[ 121] 00:18:47.429 bw ( KiB/s): min= 800, max= 3334, per=4.51%, avg=1032.70, stdev=550.66, samples=20 00:18:47.429 iops : min= 200, max= 833, avg=258.15, stdev=137.55, samples=20 00:18:47.429 lat (msec) : 2=0.62%, 
4=4.31%, 10=2.77%, 20=2.23%, 50=18.21% 00:18:47.429 lat (msec) : 100=71.04%, 250=0.81% 00:18:47.429 cpu : usr=36.14%, sys=1.33%, ctx=1121, majf=0, minf=0 00:18:47.429 IO depths : 1=0.5%, 2=1.7%, 4=5.4%, 8=76.8%, 16=15.6%, 32=0.0%, >=64=0.0% 00:18:47.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.429 complete : 0=0.0%, 4=89.1%, 8=9.7%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.429 issued rwts: total=2597,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.429 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:47.429 filename2: (groupid=0, jobs=1): err= 0: pid=82007: Tue Nov 26 19:52:41 2024 00:18:47.429 read: IOPS=243, BW=973KiB/s (996kB/s)(9752KiB/10024msec) 00:18:47.429 slat (usec): min=3, max=4024, avg=10.61, stdev=81.52 00:18:47.429 clat (msec): min=13, max=114, avg=65.70, stdev=17.53 00:18:47.429 lat (msec): min=13, max=114, avg=65.71, stdev=17.53 00:18:47.429 clat percentiles (msec): 00:18:47.429 | 1.00th=[ 23], 5.00th=[ 35], 10.00th=[ 47], 20.00th=[ 48], 00:18:47.429 | 30.00th=[ 58], 40.00th=[ 61], 50.00th=[ 72], 60.00th=[ 72], 00:18:47.429 | 70.00th=[ 75], 80.00th=[ 84], 90.00th=[ 85], 95.00th=[ 89], 00:18:47.429 | 99.00th=[ 96], 99.50th=[ 109], 99.90th=[ 109], 99.95th=[ 111], 00:18:47.429 | 99.99th=[ 115] 00:18:47.429 bw ( KiB/s): min= 832, max= 1619, per=4.24%, avg=970.15, stdev=174.43, samples=20 00:18:47.429 iops : min= 208, max= 404, avg=242.50, stdev=43.46, samples=20 00:18:47.429 lat (msec) : 20=0.57%, 50=23.50%, 100=75.14%, 250=0.78% 00:18:47.429 cpu : usr=34.74%, sys=1.27%, ctx=941, majf=0, minf=9 00:18:47.429 IO depths : 1=0.1%, 2=1.0%, 4=3.7%, 8=79.3%, 16=15.8%, 32=0.0%, >=64=0.0% 00:18:47.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.429 complete : 0=0.0%, 4=88.3%, 8=10.9%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.429 issued rwts: total=2438,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.429 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:47.429 filename2: (groupid=0, jobs=1): err= 0: pid=82008: Tue Nov 26 19:52:41 2024 00:18:47.429 read: IOPS=239, BW=957KiB/s (980kB/s)(9580KiB/10006msec) 00:18:47.429 slat (usec): min=4, max=8039, avg=17.83, stdev=231.80 00:18:47.429 clat (msec): min=6, max=108, avg=66.74, stdev=17.56 00:18:47.429 lat (msec): min=6, max=108, avg=66.75, stdev=17.55 00:18:47.429 clat percentiles (msec): 00:18:47.429 | 1.00th=[ 22], 5.00th=[ 39], 10.00th=[ 48], 20.00th=[ 49], 00:18:47.429 | 30.00th=[ 58], 40.00th=[ 61], 50.00th=[ 71], 60.00th=[ 72], 00:18:47.429 | 70.00th=[ 80], 80.00th=[ 84], 90.00th=[ 86], 95.00th=[ 94], 00:18:47.429 | 99.00th=[ 108], 99.50th=[ 108], 99.90th=[ 109], 99.95th=[ 109], 00:18:47.429 | 99.99th=[ 109] 00:18:47.429 bw ( KiB/s): min= 800, max= 1408, per=4.14%, avg=947.79, stdev=129.80, samples=19 00:18:47.429 iops : min= 200, max= 352, avg=236.95, stdev=32.45, samples=19 00:18:47.429 lat (msec) : 10=0.25%, 20=0.42%, 50=22.76%, 100=75.24%, 250=1.34% 00:18:47.430 cpu : usr=32.42%, sys=0.93%, ctx=916, majf=0, minf=9 00:18:47.430 IO depths : 1=0.1%, 2=1.6%, 4=6.3%, 8=76.7%, 16=15.4%, 32=0.0%, >=64=0.0% 00:18:47.430 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.430 complete : 0=0.0%, 4=89.0%, 8=9.7%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.430 issued rwts: total=2395,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.430 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:47.430 filename2: (groupid=0, jobs=1): err= 0: pid=82009: Tue Nov 26 19:52:41 2024 00:18:47.430 read: IOPS=233, 
BW=934KiB/s (956kB/s)(9340KiB/10003msec) 00:18:47.430 slat (nsec): min=3620, max=72075, avg=10160.49, stdev=6394.77 00:18:47.430 clat (msec): min=4, max=131, avg=68.49, stdev=17.59 00:18:47.430 lat (msec): min=4, max=131, avg=68.50, stdev=17.59 00:18:47.430 clat percentiles (msec): 00:18:47.430 | 1.00th=[ 18], 5.00th=[ 39], 10.00th=[ 48], 20.00th=[ 54], 00:18:47.430 | 30.00th=[ 61], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 72], 00:18:47.430 | 70.00th=[ 81], 80.00th=[ 85], 90.00th=[ 88], 95.00th=[ 93], 00:18:47.430 | 99.00th=[ 102], 99.50th=[ 108], 99.90th=[ 108], 99.95th=[ 132], 00:18:47.430 | 99.99th=[ 132] 00:18:47.430 bw ( KiB/s): min= 848, max= 1392, per=4.04%, avg=924.21, stdev=119.45, samples=19 00:18:47.430 iops : min= 212, max= 348, avg=231.05, stdev=29.86, samples=19 00:18:47.430 lat (msec) : 10=0.56%, 20=1.16%, 50=16.92%, 100=80.26%, 250=1.11% 00:18:47.430 cpu : usr=33.20%, sys=1.00%, ctx=1393, majf=0, minf=9 00:18:47.430 IO depths : 1=0.2%, 2=1.5%, 4=5.6%, 8=77.0%, 16=15.8%, 32=0.0%, >=64=0.0% 00:18:47.430 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.430 complete : 0=0.0%, 4=89.2%, 8=9.6%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.430 issued rwts: total=2335,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.430 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:47.430 filename2: (groupid=0, jobs=1): err= 0: pid=82010: Tue Nov 26 19:52:41 2024 00:18:47.430 read: IOPS=250, BW=1002KiB/s (1026kB/s)(9.82MiB/10033msec) 00:18:47.430 slat (usec): min=5, max=8048, avg=20.18, stdev=277.12 00:18:47.430 clat (msec): min=5, max=120, avg=63.72, stdev=21.36 00:18:47.430 lat (msec): min=5, max=120, avg=63.74, stdev=21.36 00:18:47.430 clat percentiles (msec): 00:18:47.430 | 1.00th=[ 10], 5.00th=[ 16], 10.00th=[ 35], 20.00th=[ 48], 00:18:47.430 | 30.00th=[ 57], 40.00th=[ 61], 50.00th=[ 72], 60.00th=[ 72], 00:18:47.430 | 70.00th=[ 77], 80.00th=[ 84], 90.00th=[ 85], 95.00th=[ 91], 00:18:47.430 | 99.00th=[ 96], 99.50th=[ 99], 99.90th=[ 109], 99.95th=[ 121], 00:18:47.430 | 99.99th=[ 121] 00:18:47.430 bw ( KiB/s): min= 784, max= 2438, per=4.36%, avg=998.70, stdev=356.76, samples=20 00:18:47.430 iops : min= 196, max= 609, avg=249.65, stdev=89.08, samples=20 00:18:47.430 lat (msec) : 10=1.51%, 20=5.77%, 50=19.17%, 100=73.07%, 250=0.48% 00:18:47.430 cpu : usr=35.42%, sys=1.07%, ctx=926, majf=0, minf=9 00:18:47.430 IO depths : 1=0.1%, 2=0.1%, 4=0.4%, 8=82.7%, 16=16.7%, 32=0.0%, >=64=0.0% 00:18:47.430 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.430 complete : 0=0.0%, 4=87.6%, 8=12.3%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.430 issued rwts: total=2514,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.430 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:47.430 filename2: (groupid=0, jobs=1): err= 0: pid=82011: Tue Nov 26 19:52:41 2024 00:18:47.430 read: IOPS=236, BW=948KiB/s (970kB/s)(9508KiB/10033msec) 00:18:47.430 slat (usec): min=3, max=12039, avg=30.21, stdev=427.04 00:18:47.430 clat (msec): min=10, max=119, avg=67.29, stdev=18.96 00:18:47.430 lat (msec): min=10, max=119, avg=67.32, stdev=18.96 00:18:47.430 clat percentiles (msec): 00:18:47.430 | 1.00th=[ 15], 5.00th=[ 22], 10.00th=[ 48], 20.00th=[ 51], 00:18:47.430 | 30.00th=[ 61], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 73], 00:18:47.430 | 70.00th=[ 81], 80.00th=[ 84], 90.00th=[ 86], 95.00th=[ 94], 00:18:47.430 | 99.00th=[ 101], 99.50th=[ 102], 99.90th=[ 109], 99.95th=[ 109], 00:18:47.430 | 99.99th=[ 121] 00:18:47.430 bw ( KiB/s): min= 840, max= 2019, 
per=4.14%, avg=946.55, stdev=254.05, samples=20 00:18:47.430 iops : min= 210, max= 504, avg=236.60, stdev=63.35, samples=20 00:18:47.430 lat (msec) : 20=4.71%, 50=14.05%, 100=80.31%, 250=0.93% 00:18:47.430 cpu : usr=33.52%, sys=1.00%, ctx=941, majf=0, minf=9 00:18:47.430 IO depths : 1=0.1%, 2=1.6%, 4=6.5%, 8=76.0%, 16=15.9%, 32=0.0%, >=64=0.0% 00:18:47.430 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.430 complete : 0=0.0%, 4=89.4%, 8=9.1%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.430 issued rwts: total=2377,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.430 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:47.430 filename2: (groupid=0, jobs=1): err= 0: pid=82012: Tue Nov 26 19:52:41 2024 00:18:47.430 read: IOPS=235, BW=944KiB/s (966kB/s)(9436KiB/10001msec) 00:18:47.430 slat (usec): min=2, max=8018, avg=18.67, stdev=247.27 00:18:47.430 clat (usec): min=687, max=120082, avg=67729.45, stdev=18314.93 00:18:47.430 lat (usec): min=693, max=120089, avg=67748.12, stdev=18313.39 00:18:47.430 clat percentiles (usec): 00:18:47.430 | 1.00th=[ 1680], 5.00th=[ 41681], 10.00th=[ 47973], 20.00th=[ 51119], 00:18:47.430 | 30.00th=[ 58983], 40.00th=[ 68682], 50.00th=[ 71828], 60.00th=[ 72877], 00:18:47.430 | 70.00th=[ 79168], 80.00th=[ 83362], 90.00th=[ 85459], 95.00th=[ 91751], 00:18:47.430 | 99.00th=[107480], 99.50th=[107480], 99.90th=[109577], 99.95th=[120062], 00:18:47.430 | 99.99th=[120062] 00:18:47.430 bw ( KiB/s): min= 752, max= 1264, per=4.00%, avg=916.63, stdev=110.64, samples=19 00:18:47.430 iops : min= 188, max= 316, avg=229.16, stdev=27.66, samples=19 00:18:47.430 lat (usec) : 750=0.34%, 1000=0.04% 00:18:47.430 lat (msec) : 2=0.81%, 4=0.30%, 10=0.38%, 20=0.51%, 50=16.02% 00:18:47.430 lat (msec) : 100=80.20%, 250=1.40% 00:18:47.430 cpu : usr=38.15%, sys=1.25%, ctx=1073, majf=0, minf=9 00:18:47.430 IO depths : 1=0.1%, 2=2.8%, 4=10.8%, 8=71.7%, 16=14.6%, 32=0.0%, >=64=0.0% 00:18:47.430 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.430 complete : 0=0.0%, 4=90.2%, 8=7.4%, 16=2.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.430 issued rwts: total=2359,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.430 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:47.430 filename2: (groupid=0, jobs=1): err= 0: pid=82013: Tue Nov 26 19:52:41 2024 00:18:47.430 read: IOPS=245, BW=983KiB/s (1007kB/s)(9880KiB/10050msec) 00:18:47.430 slat (usec): min=3, max=8022, avg=23.33, stdev=279.21 00:18:47.430 clat (usec): min=1388, max=119861, avg=64844.21, stdev=26452.01 00:18:47.430 lat (usec): min=1398, max=119867, avg=64867.54, stdev=26457.81 00:18:47.430 clat percentiles (usec): 00:18:47.430 | 1.00th=[ 1532], 5.00th=[ 2147], 10.00th=[ 17433], 20.00th=[ 48497], 00:18:47.430 | 30.00th=[ 58983], 40.00th=[ 68682], 50.00th=[ 71828], 60.00th=[ 76022], 00:18:47.430 | 70.00th=[ 81265], 80.00th=[ 84411], 90.00th=[ 89654], 95.00th=[ 95945], 00:18:47.430 | 99.00th=[107480], 99.50th=[107480], 99.90th=[119014], 99.95th=[119014], 00:18:47.430 | 99.99th=[120062] 00:18:47.430 bw ( KiB/s): min= 656, max= 3312, per=4.29%, avg=981.60, stdev=553.48, samples=20 00:18:47.430 iops : min= 164, max= 828, avg=245.40, stdev=138.37, samples=20 00:18:47.430 lat (msec) : 2=2.27%, 4=4.21%, 10=2.59%, 20=1.38%, 50=11.94% 00:18:47.430 lat (msec) : 100=74.78%, 250=2.83% 00:18:47.430 cpu : usr=38.75%, sys=1.19%, ctx=1083, majf=0, minf=0 00:18:47.430 IO depths : 1=0.6%, 2=3.2%, 4=10.5%, 8=71.1%, 16=14.5%, 32=0.0%, >=64=0.0% 00:18:47.430 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.430 complete : 0=0.0%, 4=90.4%, 8=7.3%, 16=2.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.430 issued rwts: total=2470,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.430 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:47.430 00:18:47.430 Run status group 0 (all jobs): 00:18:47.430 READ: bw=22.3MiB/s (23.4MB/s), 853KiB/s-1032KiB/s (874kB/s-1057kB/s), io=225MiB (236MB), run=10001-10068msec 00:18:47.430 19:52:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:18:47.430 19:52:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:18:47.430 19:52:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:18:47.430 19:52:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:18:47.430 19:52:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:18:47.430 19:52:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:47.430 19:52:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.430 19:52:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:47.430 19:52:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.430 19:52:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:18:47.430 19:52:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.430 19:52:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:47.430 19:52:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.430 19:52:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:18:47.430 19:52:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:18:47.430 19:52:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:18:47.430 19:52:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:47.430 19:52:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.430 19:52:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:47.430 19:52:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.430 19:52:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:18:47.430 19:52:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.430 19:52:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:47.430 19:52:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.430 19:52:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:18:47.430 19:52:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:18:47.430 19:52:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:18:47.430 19:52:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:18:47.430 19:52:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.430 19:52:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:47.430 19:52:41 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.430 19:52:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:18:47.430 19:52:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.430 19:52:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:47.431 bdev_null0 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:47.431 [2024-11-26 19:52:41.326977] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # 
for sub in "$@" 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:47.431 bdev_null1 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:47.431 { 00:18:47.431 "params": { 00:18:47.431 "name": "Nvme$subsystem", 00:18:47.431 "trtype": "$TEST_TRANSPORT", 00:18:47.431 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:47.431 "adrfam": "ipv4", 00:18:47.431 "trsvcid": "$NVMF_PORT", 00:18:47.431 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:47.431 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:47.431 "hdgst": ${hdgst:-false}, 00:18:47.431 "ddgst": ${ddgst:-false} 00:18:47.431 }, 00:18:47.431 "method": "bdev_nvme_attach_controller" 00:18:47.431 } 00:18:47.431 EOF 00:18:47.431 )") 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:47.431 { 00:18:47.431 "params": { 00:18:47.431 "name": "Nvme$subsystem", 00:18:47.431 "trtype": "$TEST_TRANSPORT", 00:18:47.431 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:47.431 "adrfam": "ipv4", 00:18:47.431 "trsvcid": "$NVMF_PORT", 00:18:47.431 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:47.431 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:47.431 "hdgst": ${hdgst:-false}, 00:18:47.431 "ddgst": ${ddgst:-false} 00:18:47.431 }, 00:18:47.431 "method": "bdev_nvme_attach_controller" 00:18:47.431 } 00:18:47.431 EOF 00:18:47.431 )") 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:18:47.431 "params": { 00:18:47.431 "name": "Nvme0", 00:18:47.431 "trtype": "tcp", 00:18:47.431 "traddr": "10.0.0.3", 00:18:47.431 "adrfam": "ipv4", 00:18:47.431 "trsvcid": "4420", 00:18:47.431 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:47.431 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:47.431 "hdgst": false, 00:18:47.431 "ddgst": false 00:18:47.431 }, 00:18:47.431 "method": "bdev_nvme_attach_controller" 00:18:47.431 },{ 00:18:47.431 "params": { 00:18:47.431 "name": "Nvme1", 00:18:47.431 "trtype": "tcp", 00:18:47.431 "traddr": "10.0.0.3", 00:18:47.431 "adrfam": "ipv4", 00:18:47.431 "trsvcid": "4420", 00:18:47.431 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:47.431 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:47.431 "hdgst": false, 00:18:47.431 "ddgst": false 00:18:47.431 }, 00:18:47.431 "method": "bdev_nvme_attach_controller" 00:18:47.431 }' 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:47.431 19:52:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:47.432 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:18:47.432 ... 00:18:47.432 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:18:47.432 ... 
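The four threads started below are the two job sections listed above (filename0, filename1) multiplied by numjobs=2. A rough sketch of the job file that gen_fio_conf streams to fio over /dev/fd/61 for this run, rebuilt from the parameters traced in dif.sh (bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5, files=1), is shown here; the extra [global] options and the filename= bdev names (taken from the Nvme0/Nvme1 controllers attached in the JSON above) are assumptions, since the generated file itself is never echoed in this log.

# Hedged sketch only -- reconstructed from the traced parameters, not the literal gen_fio_conf output.
cat > dif_rand_params.fio <<'FIO'
[global]
thread=1            # fio reports "Starting 4 threads" below, so thread mode is on
rw=randread         # matches the job listing above
bs=8k,16k,128k      # read,write,trim sizes: 8192B / 16.0KiB / 128KiB
iodepth=8
numjobs=2           # 2 job sections x numjobs=2 = 4 threads
runtime=5
time_based=1        # assumed; the 5001-5002 msec runtimes below suggest it

[filename0]
filename=Nvme0n1    # assumed bdev name for controller "Nvme0"

[filename1]
filename=Nvme1n1    # assumed bdev name for controller "Nvme1"
FIO
# The ioengine and the target JSON are supplied on the command line, as in the trace;
# ./target.json stands in for the /dev/fd/62 file descriptor used by the script.
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf ./target.json dif_rand_params.fio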
00:18:47.432 fio-3.35 00:18:47.432 Starting 4 threads 00:18:52.700 00:18:52.700 filename0: (groupid=0, jobs=1): err= 0: pid=82157: Tue Nov 26 19:52:47 2024 00:18:52.700 read: IOPS=2900, BW=22.7MiB/s (23.8MB/s)(113MiB/5001msec) 00:18:52.700 slat (nsec): min=3758, max=39757, avg=9091.65, stdev=5270.03 00:18:52.700 clat (usec): min=537, max=4637, avg=2733.51, stdev=787.79 00:18:52.700 lat (usec): min=544, max=4643, avg=2742.60, stdev=787.66 00:18:52.700 clat percentiles (usec): 00:18:52.700 | 1.00th=[ 963], 5.00th=[ 1500], 10.00th=[ 1582], 20.00th=[ 1876], 00:18:52.700 | 30.00th=[ 2114], 40.00th=[ 2606], 50.00th=[ 3064], 60.00th=[ 3228], 00:18:52.700 | 70.00th=[ 3326], 80.00th=[ 3425], 90.00th=[ 3523], 95.00th=[ 3621], 00:18:52.700 | 99.00th=[ 4293], 99.50th=[ 4490], 99.90th=[ 4555], 99.95th=[ 4621], 00:18:52.700 | 99.99th=[ 4621] 00:18:52.700 bw ( KiB/s): min=19152, max=24864, per=24.85%, avg=22755.56, stdev=1883.47, samples=9 00:18:52.700 iops : min= 2394, max= 3108, avg=2844.44, stdev=235.43, samples=9 00:18:52.700 lat (usec) : 750=0.11%, 1000=1.25% 00:18:52.700 lat (msec) : 2=24.66%, 4=72.37%, 10=1.61% 00:18:52.700 cpu : usr=94.00%, sys=5.46%, ctx=6, majf=0, minf=9 00:18:52.700 IO depths : 1=0.1%, 2=6.3%, 4=60.8%, 8=32.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:52.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:52.700 complete : 0=0.0%, 4=97.6%, 8=2.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:52.700 issued rwts: total=14503,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:52.700 latency : target=0, window=0, percentile=100.00%, depth=8 00:18:52.700 filename0: (groupid=0, jobs=1): err= 0: pid=82158: Tue Nov 26 19:52:47 2024 00:18:52.700 read: IOPS=2735, BW=21.4MiB/s (22.4MB/s)(107MiB/5001msec) 00:18:52.700 slat (nsec): min=3617, max=65316, avg=8569.00, stdev=5208.90 00:18:52.700 clat (usec): min=643, max=5203, avg=2899.01, stdev=743.86 00:18:52.700 lat (usec): min=648, max=5221, avg=2907.58, stdev=743.71 00:18:52.700 clat percentiles (usec): 00:18:52.700 | 1.00th=[ 1172], 5.00th=[ 1500], 10.00th=[ 1598], 20.00th=[ 2089], 00:18:52.700 | 30.00th=[ 2638], 40.00th=[ 3032], 50.00th=[ 3228], 60.00th=[ 3326], 00:18:52.700 | 70.00th=[ 3425], 80.00th=[ 3523], 90.00th=[ 3589], 95.00th=[ 3621], 00:18:52.700 | 99.00th=[ 3916], 99.50th=[ 4228], 99.90th=[ 4490], 99.95th=[ 4490], 00:18:52.700 | 99.99th=[ 4817] 00:18:52.700 bw ( KiB/s): min=17920, max=26400, per=24.18%, avg=22142.22, stdev=2722.77, samples=9 00:18:52.700 iops : min= 2240, max= 3300, avg=2767.78, stdev=340.35, samples=9 00:18:52.700 lat (usec) : 750=0.04%, 1000=0.37% 00:18:52.700 lat (msec) : 2=18.10%, 4=80.66%, 10=0.83% 00:18:52.700 cpu : usr=93.98%, sys=5.50%, ctx=37, majf=0, minf=9 00:18:52.700 IO depths : 1=0.1%, 2=10.7%, 4=58.4%, 8=30.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:52.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:52.701 complete : 0=0.0%, 4=95.9%, 8=4.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:52.701 issued rwts: total=13679,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:52.701 latency : target=0, window=0, percentile=100.00%, depth=8 00:18:52.701 filename1: (groupid=0, jobs=1): err= 0: pid=82159: Tue Nov 26 19:52:47 2024 00:18:52.701 read: IOPS=3040, BW=23.8MiB/s (24.9MB/s)(119MiB/5002msec) 00:18:52.701 slat (nsec): min=5516, max=51798, avg=9221.22, stdev=5008.81 00:18:52.701 clat (usec): min=696, max=5308, avg=2607.95, stdev=771.97 00:18:52.701 lat (usec): min=702, max=5315, avg=2617.17, stdev=771.72 00:18:52.701 clat percentiles (usec): 00:18:52.701 | 1.00th=[ 
1172], 5.00th=[ 1467], 10.00th=[ 1565], 20.00th=[ 1827], 00:18:52.701 | 30.00th=[ 1958], 40.00th=[ 2212], 50.00th=[ 2769], 60.00th=[ 3097], 00:18:52.701 | 70.00th=[ 3261], 80.00th=[ 3392], 90.00th=[ 3490], 95.00th=[ 3589], 00:18:52.701 | 99.00th=[ 3785], 99.50th=[ 3916], 99.90th=[ 4490], 99.95th=[ 4817], 00:18:52.701 | 99.99th=[ 5211] 00:18:52.701 bw ( KiB/s): min=21968, max=26464, per=26.42%, avg=24197.33, stdev=1299.70, samples=9 00:18:52.701 iops : min= 2746, max= 3308, avg=3024.67, stdev=162.46, samples=9 00:18:52.701 lat (usec) : 750=0.05%, 1000=0.43% 00:18:52.701 lat (msec) : 2=30.60%, 4=68.50%, 10=0.42% 00:18:52.701 cpu : usr=94.06%, sys=5.30%, ctx=29, majf=0, minf=0 00:18:52.701 IO depths : 1=0.1%, 2=2.9%, 4=62.5%, 8=34.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:52.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:52.701 complete : 0=0.0%, 4=98.9%, 8=1.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:52.701 issued rwts: total=15210,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:52.701 latency : target=0, window=0, percentile=100.00%, depth=8 00:18:52.701 filename1: (groupid=0, jobs=1): err= 0: pid=82160: Tue Nov 26 19:52:47 2024 00:18:52.701 read: IOPS=2772, BW=21.7MiB/s (22.7MB/s)(108MiB/5001msec) 00:18:52.701 slat (nsec): min=3821, max=38638, avg=9618.85, stdev=5261.34 00:18:52.701 clat (usec): min=451, max=5269, avg=2856.73, stdev=721.66 00:18:52.701 lat (usec): min=457, max=5276, avg=2866.35, stdev=721.57 00:18:52.701 clat percentiles (usec): 00:18:52.701 | 1.00th=[ 1172], 5.00th=[ 1516], 10.00th=[ 1745], 20.00th=[ 2024], 00:18:52.701 | 30.00th=[ 2442], 40.00th=[ 2966], 50.00th=[ 3228], 60.00th=[ 3294], 00:18:52.701 | 70.00th=[ 3359], 80.00th=[ 3425], 90.00th=[ 3523], 95.00th=[ 3621], 00:18:52.701 | 99.00th=[ 3884], 99.50th=[ 4146], 99.90th=[ 4424], 99.95th=[ 4490], 00:18:52.701 | 99.99th=[ 5080] 00:18:52.701 bw ( KiB/s): min=19152, max=24864, per=24.52%, avg=22455.11, stdev=1970.63, samples=9 00:18:52.701 iops : min= 2394, max= 3108, avg=2806.89, stdev=246.33, samples=9 00:18:52.701 lat (usec) : 500=0.01%, 750=0.04%, 1000=0.42% 00:18:52.701 lat (msec) : 2=19.10%, 4=79.74%, 10=0.70% 00:18:52.701 cpu : usr=94.42%, sys=5.02%, ctx=8, majf=0, minf=10 00:18:52.701 IO depths : 1=0.1%, 2=9.6%, 4=59.0%, 8=31.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:52.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:52.701 complete : 0=0.0%, 4=96.3%, 8=3.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:52.701 issued rwts: total=13867,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:52.701 latency : target=0, window=0, percentile=100.00%, depth=8 00:18:52.701 00:18:52.701 Run status group 0 (all jobs): 00:18:52.701 READ: bw=89.4MiB/s (93.8MB/s), 21.4MiB/s-23.8MiB/s (22.4MB/s-24.9MB/s), io=447MiB (469MB), run=5001-5002msec 00:18:52.701 19:52:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:18:52.701 19:52:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:18:52.701 19:52:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:18:52.701 19:52:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:18:52.701 19:52:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:18:52.701 19:52:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:52.701 19:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.701 19:52:47 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:18:52.701 19:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.701 19:52:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:18:52.701 19:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.701 19:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:52.701 19:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.701 19:52:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:18:52.701 19:52:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:18:52.701 19:52:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:18:52.701 19:52:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:52.701 19:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.701 19:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:52.701 19:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.701 19:52:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:18:52.701 19:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.701 19:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:52.701 ************************************ 00:18:52.701 END TEST fio_dif_rand_params 00:18:52.701 ************************************ 00:18:52.701 19:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.701 00:18:52.701 real 0m22.984s 00:18:52.701 user 2m7.388s 00:18:52.701 sys 0m5.685s 00:18:52.701 19:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:52.701 19:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:18:52.701 19:52:47 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:18:52.701 19:52:47 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:52.701 19:52:47 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:52.701 19:52:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:18:52.701 ************************************ 00:18:52.701 START TEST fio_dif_digest 00:18:52.701 ************************************ 00:18:52.701 19:52:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:18:52.701 19:52:47 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:18:52.701 19:52:47 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:18:52.701 19:52:47 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:18:52.701 19:52:47 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:18:52.701 19:52:47 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:18:52.701 19:52:47 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:18:52.701 19:52:47 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:18:52.701 19:52:47 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:18:52.701 19:52:47 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:18:52.701 19:52:47 nvmf_dif.fio_dif_digest -- 
target/dif.sh@128 -- # ddgst=true 00:18:52.701 19:52:47 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:18:52.701 19:52:47 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:18:52.701 19:52:47 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:18:52.701 19:52:47 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:18:52.701 19:52:47 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:18:52.701 19:52:47 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:18:52.701 19:52:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.701 19:52:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:18:52.701 bdev_null0 00:18:52.701 19:52:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.701 19:52:47 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:18:52.701 19:52:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.701 19:52:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:18:52.701 19:52:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.701 19:52:47 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:18:52.701 19:52:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.701 19:52:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:18:52.701 19:52:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.701 19:52:47 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:18:52.701 19:52:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.701 19:52:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:18:52.701 [2024-11-26 19:52:47.309351] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:52.701 19:52:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.701 19:52:47 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:18:52.701 19:52:47 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:18:52.701 19:52:47 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:52.701 19:52:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:52.701 19:52:47 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:18:52.701 19:52:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:52.701 19:52:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:18:52.701 19:52:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:52.701 19:52:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:52.701 19:52:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:18:52.701 19:52:47 
nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:18:52.701 19:52:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:52.701 19:52:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:18:52.701 19:52:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:52.701 19:52:47 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:18:52.701 19:52:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:52.702 19:52:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:52.702 { 00:18:52.702 "params": { 00:18:52.702 "name": "Nvme$subsystem", 00:18:52.702 "trtype": "$TEST_TRANSPORT", 00:18:52.702 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:52.702 "adrfam": "ipv4", 00:18:52.702 "trsvcid": "$NVMF_PORT", 00:18:52.702 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:52.702 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:52.702 "hdgst": ${hdgst:-false}, 00:18:52.702 "ddgst": ${ddgst:-false} 00:18:52.702 }, 00:18:52.702 "method": "bdev_nvme_attach_controller" 00:18:52.702 } 00:18:52.702 EOF 00:18:52.702 )") 00:18:52.702 19:52:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:52.702 19:52:47 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:18:52.702 19:52:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:52.702 19:52:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:18:52.702 19:52:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:52.702 19:52:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:18:52.702 19:52:47 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:18:52.702 19:52:47 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:18:52.702 19:52:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:18:52.702 19:52:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:18:52.702 19:52:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:18:52.702 "params": { 00:18:52.702 "name": "Nvme0", 00:18:52.702 "trtype": "tcp", 00:18:52.702 "traddr": "10.0.0.3", 00:18:52.702 "adrfam": "ipv4", 00:18:52.702 "trsvcid": "4420", 00:18:52.702 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:52.702 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:52.702 "hdgst": true, 00:18:52.702 "ddgst": true 00:18:52.702 }, 00:18:52.702 "method": "bdev_nvme_attach_controller" 00:18:52.702 }' 00:18:52.702 19:52:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:18:52.702 19:52:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:18:52.702 19:52:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:52.702 19:52:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:52.702 19:52:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:52.702 19:52:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:18:52.702 19:52:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:18:52.702 19:52:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:18:52.702 19:52:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:52.702 19:52:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:52.702 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:18:52.702 ... 
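The target side for this digest run was assembled by the rpc_cmd calls traced above; collected in one place they amount to the sketch below. rpc_cmd is the autotest wrapper around scripts/rpc.py, and the size/block-size reading in the comments is the usual bdev_null_create meaning (MiB and bytes) rather than something the log states explicitly.

# Commands copied from the traces above; comments are hedged interpretation.
rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3   # 64 MiB null bdev, 512 B blocks + 16 B metadata, DIF type 3
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0    # attach the DIF bdev to the subsystem as a namespace
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
# Initiator side: the JSON printed just above attaches Nvme0 with
#   "hdgst": true, "ddgst": true
# so the NVMe/TCP PDUs in this run carry both header and data digests (CRC32C).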
00:18:52.702 fio-3.35 00:18:52.702 Starting 3 threads 00:19:02.807 00:19:02.807 filename0: (groupid=0, jobs=1): err= 0: pid=82271: Tue Nov 26 19:52:57 2024 00:19:02.807 read: IOPS=314, BW=39.4MiB/s (41.3MB/s)(394MiB/10001msec) 00:19:02.807 slat (nsec): min=5846, max=58023, avg=7210.36, stdev=2074.87 00:19:02.807 clat (usec): min=5812, max=9983, avg=9508.06, stdev=179.29 00:19:02.807 lat (usec): min=5823, max=9992, avg=9515.27, stdev=179.03 00:19:02.807 clat percentiles (usec): 00:19:02.807 | 1.00th=[ 9241], 5.00th=[ 9372], 10.00th=[ 9372], 20.00th=[ 9372], 00:19:02.807 | 30.00th=[ 9372], 40.00th=[ 9503], 50.00th=[ 9634], 60.00th=[ 9634], 00:19:02.807 | 70.00th=[ 9634], 80.00th=[ 9634], 90.00th=[ 9634], 95.00th=[ 9634], 00:19:02.807 | 99.00th=[ 9765], 99.50th=[ 9896], 99.90th=[ 9896], 99.95th=[10028], 00:19:02.807 | 99.99th=[10028] 00:19:02.807 bw ( KiB/s): min=39936, max=41472, per=33.36%, avg=40340.21, stdev=469.84, samples=19 00:19:02.807 iops : min= 312, max= 324, avg=315.16, stdev= 3.67, samples=19 00:19:02.807 lat (msec) : 10=100.00% 00:19:02.807 cpu : usr=92.77%, sys=6.57%, ctx=165, majf=0, minf=0 00:19:02.807 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:02.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:02.807 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:02.807 issued rwts: total=3150,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:02.807 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:02.807 filename0: (groupid=0, jobs=1): err= 0: pid=82272: Tue Nov 26 19:52:57 2024 00:19:02.807 read: IOPS=314, BW=39.4MiB/s (41.3MB/s)(394MiB/10003msec) 00:19:02.807 slat (nsec): min=3024, max=35299, avg=9098.22, stdev=5031.85 00:19:02.807 clat (usec): min=6532, max=11331, avg=9505.86, stdev=180.69 00:19:02.807 lat (usec): min=6538, max=11341, avg=9514.96, stdev=180.27 00:19:02.807 clat percentiles (usec): 00:19:02.807 | 1.00th=[ 9241], 5.00th=[ 9241], 10.00th=[ 9372], 20.00th=[ 9372], 00:19:02.807 | 30.00th=[ 9372], 40.00th=[ 9503], 50.00th=[ 9503], 60.00th=[ 9634], 00:19:02.807 | 70.00th=[ 9634], 80.00th=[ 9634], 90.00th=[ 9634], 95.00th=[ 9634], 00:19:02.807 | 99.00th=[ 9765], 99.50th=[ 9896], 99.90th=[ 9896], 99.95th=[11338], 00:19:02.807 | 99.99th=[11338] 00:19:02.807 bw ( KiB/s): min=39936, max=40704, per=33.33%, avg=40299.79, stdev=393.98, samples=19 00:19:02.807 iops : min= 312, max= 318, avg=314.84, stdev= 3.08, samples=19 00:19:02.807 lat (msec) : 10=99.90%, 20=0.10% 00:19:02.807 cpu : usr=93.00%, sys=6.60%, ctx=9, majf=0, minf=0 00:19:02.807 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:02.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:02.807 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:02.807 issued rwts: total=3150,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:02.807 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:02.807 filename0: (groupid=0, jobs=1): err= 0: pid=82273: Tue Nov 26 19:52:57 2024 00:19:02.807 read: IOPS=314, BW=39.4MiB/s (41.3MB/s)(394MiB/10003msec) 00:19:02.807 slat (nsec): min=3800, max=53302, avg=9657.42, stdev=5688.20 00:19:02.807 clat (usec): min=6519, max=10280, avg=9504.39, stdev=170.98 00:19:02.807 lat (usec): min=6526, max=10293, avg=9514.05, stdev=170.74 00:19:02.807 clat percentiles (usec): 00:19:02.807 | 1.00th=[ 9241], 5.00th=[ 9241], 10.00th=[ 9372], 20.00th=[ 9372], 00:19:02.807 | 30.00th=[ 9372], 40.00th=[ 9503], 50.00th=[ 9503], 
60.00th=[ 9634], 00:19:02.807 | 70.00th=[ 9634], 80.00th=[ 9634], 90.00th=[ 9634], 95.00th=[ 9634], 00:19:02.807 | 99.00th=[ 9765], 99.50th=[ 9896], 99.90th=[ 9896], 99.95th=[10290], 00:19:02.807 | 99.99th=[10290] 00:19:02.807 bw ( KiB/s): min=39936, max=40704, per=33.33%, avg=40299.79, stdev=393.98, samples=19 00:19:02.807 iops : min= 312, max= 318, avg=314.84, stdev= 3.08, samples=19 00:19:02.807 lat (msec) : 10=99.90%, 20=0.10% 00:19:02.807 cpu : usr=93.15%, sys=6.43%, ctx=21, majf=0, minf=0 00:19:02.807 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:02.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:02.807 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:02.807 issued rwts: total=3150,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:02.807 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:02.808 00:19:02.808 Run status group 0 (all jobs): 00:19:02.808 READ: bw=118MiB/s (124MB/s), 39.4MiB/s-39.4MiB/s (41.3MB/s-41.3MB/s), io=1181MiB (1239MB), run=10001-10003msec 00:19:03.066 19:52:58 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:19:03.066 19:52:58 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:19:03.066 19:52:58 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:19:03.066 19:52:58 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:03.066 19:52:58 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:19:03.066 19:52:58 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:03.066 19:52:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.066 19:52:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:19:03.066 19:52:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.066 19:52:58 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:03.066 19:52:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.066 19:52:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:19:03.066 ************************************ 00:19:03.066 END TEST fio_dif_digest 00:19:03.066 ************************************ 00:19:03.066 19:52:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.066 00:19:03.066 real 0m10.813s 00:19:03.066 user 0m28.417s 00:19:03.066 sys 0m2.137s 00:19:03.066 19:52:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:03.066 19:52:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:19:03.066 19:52:58 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:19:03.066 19:52:58 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:19:03.066 19:52:58 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:03.066 19:52:58 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:19:03.066 19:52:58 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:03.066 19:52:58 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:19:03.066 19:52:58 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:03.066 19:52:58 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:03.066 rmmod nvme_tcp 00:19:03.066 rmmod nvme_fabrics 00:19:03.066 rmmod nvme_keyring 00:19:03.066 19:52:58 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:03.066 19:52:58 nvmf_dif -- 
nvmf/common.sh@128 -- # set -e 00:19:03.066 19:52:58 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:19:03.066 19:52:58 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 81501 ']' 00:19:03.066 19:52:58 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 81501 00:19:03.066 19:52:58 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 81501 ']' 00:19:03.066 19:52:58 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 81501 00:19:03.066 19:52:58 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:19:03.066 19:52:58 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:03.066 19:52:58 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81501 00:19:03.066 killing process with pid 81501 00:19:03.066 19:52:58 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:03.066 19:52:58 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:03.066 19:52:58 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81501' 00:19:03.066 19:52:58 nvmf_dif -- common/autotest_common.sh@973 -- # kill 81501 00:19:03.066 19:52:58 nvmf_dif -- common/autotest_common.sh@978 -- # wait 81501 00:19:03.325 19:52:58 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:19:03.325 19:52:58 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:03.325 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:03.583 Waiting for block devices as requested 00:19:03.583 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:03.583 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:03.583 19:52:58 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:03.583 19:52:58 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:03.583 19:52:58 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:19:03.583 19:52:58 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:19:03.583 19:52:58 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:03.583 19:52:58 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:19:03.583 19:52:58 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:03.583 19:52:58 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:03.583 19:52:58 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:03.583 19:52:58 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:03.583 19:52:58 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:03.583 19:52:58 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:03.583 19:52:58 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:03.583 19:52:58 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:03.583 19:52:58 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:03.583 19:52:58 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:03.583 19:52:58 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:03.841 19:52:58 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:03.841 19:52:58 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:03.841 19:52:58 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:03.841 19:52:58 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:03.841 19:52:58 nvmf_dif -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:19:03.841 19:52:58 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:03.841 19:52:58 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:03.841 19:52:58 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:03.841 19:52:58 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:19:03.841 00:19:03.841 real 0m58.268s 00:19:03.841 user 3m51.766s 00:19:03.841 sys 0m14.106s 00:19:03.841 19:52:58 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:03.841 19:52:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:03.841 ************************************ 00:19:03.841 END TEST nvmf_dif 00:19:03.841 ************************************ 00:19:03.841 19:52:59 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:19:03.841 19:52:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:03.841 19:52:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:03.841 19:52:59 -- common/autotest_common.sh@10 -- # set +x 00:19:03.841 ************************************ 00:19:03.841 START TEST nvmf_abort_qd_sizes 00:19:03.841 ************************************ 00:19:03.841 19:52:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:19:03.841 * Looking for test storage... 00:19:03.841 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:03.841 19:52:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:03.841 19:52:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:19:03.841 19:52:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:04.101 19:52:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:04.101 19:52:59 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:04.101 19:52:59 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:04.101 19:52:59 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:04.101 19:52:59 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:19:04.101 19:52:59 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:19:04.101 19:52:59 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:19:04.101 19:52:59 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:19:04.101 19:52:59 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:19:04.101 19:52:59 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:19:04.101 19:52:59 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:19:04.101 19:52:59 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:04.101 19:52:59 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:19:04.101 19:52:59 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:19:04.101 19:52:59 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:04.101 19:52:59 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:04.101 19:52:59 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:19:04.101 19:52:59 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:19:04.101 19:52:59 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:04.101 19:52:59 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:19:04.101 19:52:59 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:19:04.101 19:52:59 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:19:04.101 19:52:59 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:19:04.101 19:52:59 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:04.101 19:52:59 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:19:04.101 19:52:59 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:19:04.101 19:52:59 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:04.101 19:52:59 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:04.101 19:52:59 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:19:04.101 19:52:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:04.101 19:52:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:04.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:04.101 --rc genhtml_branch_coverage=1 00:19:04.101 --rc genhtml_function_coverage=1 00:19:04.101 --rc genhtml_legend=1 00:19:04.101 --rc geninfo_all_blocks=1 00:19:04.101 --rc geninfo_unexecuted_blocks=1 00:19:04.101 00:19:04.101 ' 00:19:04.101 19:52:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:04.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:04.101 --rc genhtml_branch_coverage=1 00:19:04.101 --rc genhtml_function_coverage=1 00:19:04.101 --rc genhtml_legend=1 00:19:04.101 --rc geninfo_all_blocks=1 00:19:04.101 --rc geninfo_unexecuted_blocks=1 00:19:04.101 00:19:04.101 ' 00:19:04.101 19:52:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:04.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:04.101 --rc genhtml_branch_coverage=1 00:19:04.101 --rc genhtml_function_coverage=1 00:19:04.101 --rc genhtml_legend=1 00:19:04.101 --rc geninfo_all_blocks=1 00:19:04.101 --rc geninfo_unexecuted_blocks=1 00:19:04.101 00:19:04.101 ' 00:19:04.101 19:52:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:04.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:04.101 --rc genhtml_branch_coverage=1 00:19:04.101 --rc genhtml_function_coverage=1 00:19:04.101 --rc genhtml_legend=1 00:19:04.101 --rc geninfo_all_blocks=1 00:19:04.101 --rc geninfo_unexecuted_blocks=1 00:19:04.101 00:19:04.101 ' 00:19:04.101 19:52:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:04.101 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:19:04.101 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:04.101 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:04.101 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:04.101 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:04.101 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:19:04.101 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:04.101 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:04.101 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:04.101 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:04.101 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:04.101 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:19:04.101 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=91838eb1-5852-43eb-90b2-09876f360ab2 00:19:04.101 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:04.101 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:04.101 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:04.101 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:04.101 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:04.101 19:52:59 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:19:04.101 19:52:59 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:04.101 19:52:59 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:04.102 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:04.102 Cannot find device "nvmf_init_br" 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:04.102 Cannot find device "nvmf_init_br2" 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:04.102 Cannot find device "nvmf_tgt_br" 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:04.102 Cannot find device "nvmf_tgt_br2" 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:04.102 Cannot find device "nvmf_init_br" 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:04.102 Cannot find device "nvmf_init_br2" 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:04.102 Cannot find device "nvmf_tgt_br" 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:04.102 Cannot find device "nvmf_tgt_br2" 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:04.102 Cannot find device "nvmf_br" 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:04.102 Cannot find device "nvmf_init_if" 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:04.102 Cannot find device "nvmf_init_if2" 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:04.102 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:04.102 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:04.102 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:04.361 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:04.361 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:04.361 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:04.361 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:04.361 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:04.361 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:04.361 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:04.361 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:04.361 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:04.361 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:04.361 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:04.361 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:04.361 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:04.361 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:04.361 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:04.361 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:04.361 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:04.361 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:04.361 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:04.361 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:04.361 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:04.361 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:04.361 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:04.361 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:04.361 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:04.361 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:04.361 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:04.361 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:19:04.361 00:19:04.361 --- 10.0.0.3 ping statistics --- 00:19:04.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:04.361 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:19:04.361 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:04.361 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:04.361 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.029 ms 00:19:04.361 00:19:04.361 --- 10.0.0.4 ping statistics --- 00:19:04.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:04.361 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:19:04.361 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:04.361 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:04.361 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:19:04.361 00:19:04.361 --- 10.0.0.1 ping statistics --- 00:19:04.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:04.361 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:19:04.361 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:04.361 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:04.361 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.039 ms 00:19:04.361 00:19:04.361 --- 10.0.0.2 ping statistics --- 00:19:04.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:04.361 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:19:04.361 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:04.361 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0 00:19:04.361 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:19:04.361 19:52:59 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:04.928 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:04.928 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:19:04.928 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:19:04.928 19:53:00 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:04.928 19:53:00 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:04.928 19:53:00 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:04.928 19:53:00 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:04.928 19:53:00 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:04.928 19:53:00 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:04.928 19:53:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:19:04.928 19:53:00 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:04.928 19:53:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:04.928 19:53:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:19:04.928 19:53:00 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:19:04.928 19:53:00 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=82917 00:19:04.928 19:53:00 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 82917 00:19:04.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:04.928 19:53:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 82917 ']' 00:19:04.928 19:53:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:04.928 19:53:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:04.928 19:53:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:04.928 19:53:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:04.928 19:53:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:19:04.928 [2024-11-26 19:53:00.163587] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:19:04.928 [2024-11-26 19:53:00.163640] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:05.186 [2024-11-26 19:53:00.292851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:05.187 [2024-11-26 19:53:00.324974] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:05.187 [2024-11-26 19:53:00.325014] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:05.187 [2024-11-26 19:53:00.325019] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:05.187 [2024-11-26 19:53:00.325023] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:05.187 [2024-11-26 19:53:00.325027] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:05.187 [2024-11-26 19:53:00.325623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:05.187 [2024-11-26 19:53:00.325797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:05.187 [2024-11-26 19:53:00.326238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:05.187 [2024-11-26 19:53:00.326397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:05.187 [2024-11-26 19:53:00.354567] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:05.754 19:53:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:05.754 19:53:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:19:05.754 19:53:00 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:05.754 19:53:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:05.754 19:53:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:19:06.013 19:53:01 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:06.013 19:53:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:19:06.013 19:53:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:19:06.013 19:53:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:19:06.013 19:53:01 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:19:06.013 19:53:01 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:19:06.013 19:53:01 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:19:06.013 19:53:01 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:19:06.013 19:53:01 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:19:06.013 19:53:01 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:19:06.013 19:53:01 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:19:06.013 19:53:01 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:19:06.013 19:53:01 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:19:06.013 19:53:01 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:19:06.013 19:53:01 
nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:19:06.013 19:53:01 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:19:06.013 19:53:01 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:19:06.013 19:53:01 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:19:06.013 19:53:01 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:19:06.013 19:53:01 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:19:06.013 19:53:01 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:19:06.013 19:53:01 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:19:06.013 19:53:01 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:19:06.013 19:53:01 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:19:06.013 19:53:01 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:19:06.013 19:53:01 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:19:06.013 19:53:01 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:19:06.013 19:53:01 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:19:06.013 19:53:01 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:19:06.013 19:53:01 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:19:06.013 19:53:01 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:19:06.013 19:53:01 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:19:06.013 19:53:01 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:19:06.013 19:53:01 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:19:06.013 19:53:01 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:19:06.013 19:53:01 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:19:06.013 19:53:01 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:19:06.013 19:53:01 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:19:06.013 19:53:01 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:19:06.013 19:53:01 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:19:06.013 19:53:01 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:19:06.013 19:53:01 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:19:06.013 19:53:01 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:19:06.013 19:53:01 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:19:06.013 19:53:01 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:19:06.013 19:53:01 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:19:06.013 19:53:01 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:19:06.013 19:53:01 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:19:06.013 19:53:01 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:19:06.013 19:53:01 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:19:06.013 19:53:01 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:19:06.013 19:53:01 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:19:06.013 19:53:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
00:19:06.013 19:53:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:19:06.013 19:53:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:19:06.013 19:53:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:06.013 19:53:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:06.013 19:53:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:19:06.013 ************************************ 00:19:06.013 START TEST spdk_target_abort 00:19:06.013 ************************************ 00:19:06.013 19:53:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:19:06.013 19:53:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:19:06.013 19:53:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:19:06.013 19:53:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.013 19:53:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:19:06.013 spdk_targetn1 00:19:06.013 19:53:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.014 19:53:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:06.014 19:53:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.014 19:53:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:19:06.014 [2024-11-26 19:53:01.110096] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:06.014 19:53:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.014 19:53:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:19:06.014 19:53:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.014 19:53:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:19:06.014 19:53:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.014 19:53:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:19:06.014 19:53:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.014 19:53:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:19:06.014 19:53:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.014 19:53:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:19:06.014 19:53:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.014 19:53:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:19:06.014 [2024-11-26 19:53:01.146109] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:06.014 19:53:01 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.014 19:53:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:19:06.014 19:53:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:19:06.014 19:53:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:19:06.014 19:53:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:19:06.014 19:53:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:19:06.014 19:53:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:19:06.014 19:53:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:19:06.014 19:53:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:19:06.014 19:53:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:19:06.014 19:53:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:06.014 19:53:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:19:06.014 19:53:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:06.014 19:53:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:19:06.014 19:53:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:06.014 19:53:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:19:06.014 19:53:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:06.014 19:53:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:19:06.014 19:53:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:06.014 19:53:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:06.014 19:53:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:06.014 19:53:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:09.293 Initializing NVMe Controllers 00:19:09.293 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:19:09.293 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:19:09.293 Initialization complete. Launching workers. 
00:19:09.293 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 16797, failed: 0 00:19:09.293 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1018, failed to submit 15779 00:19:09.293 success 815, unsuccessful 203, failed 0 00:19:09.293 19:53:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:09.293 19:53:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:12.573 Initializing NVMe Controllers 00:19:12.573 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:19:12.573 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:19:12.573 Initialization complete. Launching workers. 00:19:12.573 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9000, failed: 0 00:19:12.573 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1150, failed to submit 7850 00:19:12.573 success 411, unsuccessful 739, failed 0 00:19:12.573 19:53:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:12.573 19:53:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:15.854 Initializing NVMe Controllers 00:19:15.854 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:19:15.854 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:19:15.854 Initialization complete. Launching workers. 
00:19:15.854 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38519, failed: 0 00:19:15.854 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2292, failed to submit 36227 00:19:15.854 success 557, unsuccessful 1735, failed 0 00:19:15.854 19:53:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:19:15.855 19:53:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.855 19:53:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:19:15.855 19:53:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.855 19:53:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:19:15.855 19:53:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.855 19:53:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:19:17.756 19:53:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.756 19:53:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 82917 00:19:17.756 19:53:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 82917 ']' 00:19:17.756 19:53:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 82917 00:19:17.756 19:53:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:19:17.756 19:53:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:17.756 19:53:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82917 00:19:17.756 killing process with pid 82917 00:19:17.756 19:53:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:17.756 19:53:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:17.756 19:53:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82917' 00:19:17.756 19:53:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 82917 00:19:17.756 19:53:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 82917 00:19:18.093 00:19:18.093 real 0m12.045s 00:19:18.093 user 0m47.655s 00:19:18.093 sys 0m1.801s 00:19:18.093 19:53:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:18.093 ************************************ 00:19:18.093 END TEST spdk_target_abort 00:19:18.093 19:53:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:19:18.093 ************************************ 00:19:18.093 19:53:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:19:18.093 19:53:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:18.093 19:53:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:18.093 19:53:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:19:18.093 ************************************ 00:19:18.093 START TEST kernel_target_abort 00:19:18.093 
************************************ 00:19:18.093 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:19:18.093 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:19:18.093 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:19:18.093 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:18.093 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:18.093 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:18.093 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:18.093 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:18.093 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:18.093 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:18.093 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:18.093 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:18.093 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:19:18.093 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:19:18.093 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:19:18.093 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:18.093 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:18.093 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:19:18.093 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:19:18.093 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:19:18.093 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:19:18.093 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:19:18.093 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:18.351 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:18.351 Waiting for block devices as requested 00:19:18.351 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:18.351 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:18.351 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:18.351 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:18.351 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:19:18.351 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:19:18.351 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:18.351 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:18.351 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:19:18.351 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:19:18.351 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:19:18.610 No valid GPT data, bailing 00:19:18.610 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:18.610 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:19:18.610 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:19:18.610 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:19:18.610 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:18.610 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:19:18.610 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:19:18.610 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:19:18.610 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:19:18.610 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:18.610 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:19:18.610 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:19:18.610 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:19:18.610 No valid GPT data, bailing 00:19:18.610 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:19:18.610 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:19:18.610 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:19:18.610 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:19:18.610 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:18.610 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:19:18.610 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:19:18.610 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:19:18.610 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:19:18.610 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:18.610 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:19:18.610 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:19:18.610 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:19:18.610 No valid GPT data, bailing 00:19:18.610 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:19:18.610 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:19:18.610 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:19:18.610 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:19:18.610 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:18.610 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:19:18.611 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:19:18.611 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:19:18.611 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:19:18.611 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:18.611 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:19:18.611 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:19:18.611 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:19:18.611 No valid GPT data, bailing 00:19:18.611 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:19:18.611 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:19:18.611 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:19:18.611 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:19:18.611 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
-b /dev/nvme1n1 ]] 00:19:18.611 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:18.611 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:18.611 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:19:18.611 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:19:18.611 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:19:18.611 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:19:18.611 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:19:18.611 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:19:18.611 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:19:18.611 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:19:18.611 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:19:18.611 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:19:18.611 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 --hostid=91838eb1-5852-43eb-90b2-09876f360ab2 -a 10.0.0.1 -t tcp -s 4420 00:19:18.611 00:19:18.611 Discovery Log Number of Records 2, Generation counter 2 00:19:18.611 =====Discovery Log Entry 0====== 00:19:18.611 trtype: tcp 00:19:18.611 adrfam: ipv4 00:19:18.611 subtype: current discovery subsystem 00:19:18.611 treq: not specified, sq flow control disable supported 00:19:18.611 portid: 1 00:19:18.611 trsvcid: 4420 00:19:18.611 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:18.611 traddr: 10.0.0.1 00:19:18.611 eflags: none 00:19:18.611 sectype: none 00:19:18.611 =====Discovery Log Entry 1====== 00:19:18.611 trtype: tcp 00:19:18.611 adrfam: ipv4 00:19:18.611 subtype: nvme subsystem 00:19:18.611 treq: not specified, sq flow control disable supported 00:19:18.611 portid: 1 00:19:18.611 trsvcid: 4420 00:19:18.611 subnqn: nqn.2016-06.io.spdk:testnqn 00:19:18.611 traddr: 10.0.0.1 00:19:18.611 eflags: none 00:19:18.611 sectype: none 00:19:18.611 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:19:18.611 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:19:18.611 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:19:18.611 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:19:18.611 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:19:18.611 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:19:18.611 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:19:18.611 19:53:13 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:19:18.611 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:19:18.611 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:18.611 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:19:18.611 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:18.611 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:19:18.611 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:18.611 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:19:18.611 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:18.611 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:19:18.611 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:18.611 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:18.611 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:18.611 19:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:21.893 Initializing NVMe Controllers 00:19:21.893 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:19:21.893 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:19:21.893 Initialization complete. Launching workers. 00:19:21.893 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 56455, failed: 0 00:19:21.893 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 56455, failed to submit 0 00:19:21.893 success 0, unsuccessful 56455, failed 0 00:19:21.893 19:53:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:21.893 19:53:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:25.172 Initializing NVMe Controllers 00:19:25.172 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:19:25.172 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:19:25.172 Initialization complete. Launching workers. 
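Note on the configure-kernel-target trace above: xtrace hides redirections, so only the echo payloads are visible (the SPDK-nqn string, the device path, the address/port values). A minimal sketch of the same configfs setup follows; the mkdir and ln -s commands are taken verbatim from the trace, while the redirect targets are the stock nvmet configfs attribute names and are an assumption, not something this log confirms (the echo of the SPDK-prefixed serial/model string is left out because its destination attribute is likewise not visible).
# Hedged sketch of the kernel NVMe-oF TCP target built above (nqn/device/address from this run;
# attribute file names assumed from the standard nvmet configfs layout).
nqn=nqn.2016-06.io.spdk:testnqn
mkdir /sys/kernel/config/nvmet/subsystems/$nqn
mkdir /sys/kernel/config/nvmet/subsystems/$nqn/namespaces/1
mkdir /sys/kernel/config/nvmet/ports/1
echo 1 > /sys/kernel/config/nvmet/subsystems/$nqn/attr_allow_any_host          # assumed target
echo /dev/nvme1n1 > /sys/kernel/config/nvmet/subsystems/$nqn/namespaces/1/device_path
echo 1 > /sys/kernel/config/nvmet/subsystems/$nqn/namespaces/1/enable
echo 10.0.0.1 > /sys/kernel/config/nvmet/ports/1/addr_traddr
echo tcp > /sys/kernel/config/nvmet/ports/1/addr_trtype
echo 4420 > /sys/kernel/config/nvmet/ports/1/addr_trsvcid
echo ipv4 > /sys/kernel/config/nvmet/ports/1/addr_adrfam
ln -s /sys/kernel/config/nvmet/subsystems/$nqn /sys/kernel/config/nvmet/ports/1/subsystems/
The abort example is then pointed at that target once per queue depth in the qds loop, as the -q 4 / -q 24 / -q 64 invocations traced around this point show.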
00:19:25.172 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 92396, failed: 0 00:19:25.172 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 38098, failed to submit 54298 00:19:25.172 success 0, unsuccessful 38098, failed 0 00:19:25.172 19:53:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:25.172 19:53:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:28.452 Initializing NVMe Controllers 00:19:28.452 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:19:28.452 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:19:28.452 Initialization complete. Launching workers. 00:19:28.452 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 104173, failed: 0 00:19:28.452 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 26066, failed to submit 78107 00:19:28.452 success 0, unsuccessful 26066, failed 0 00:19:28.452 19:53:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:19:28.452 19:53:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:19:28.452 19:53:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:19:28.453 19:53:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:28.453 19:53:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:28.453 19:53:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:19:28.453 19:53:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:28.453 19:53:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:19:28.453 19:53:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:19:28.453 19:53:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:28.709 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:35.269 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:19:35.269 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:19:35.269 00:19:35.269 real 0m16.651s 00:19:35.269 user 0m7.192s 00:19:35.269 sys 0m7.428s 00:19:35.269 19:53:29 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:35.269 ************************************ 00:19:35.269 END TEST kernel_target_abort 00:19:35.269 ************************************ 00:19:35.269 19:53:29 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:19:35.269 19:53:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:19:35.269 19:53:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:19:35.269 
19:53:29 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:35.269 19:53:29 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:19:35.269 19:53:29 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:35.269 19:53:29 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:19:35.269 19:53:29 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:35.269 19:53:29 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:35.269 rmmod nvme_tcp 00:19:35.269 rmmod nvme_fabrics 00:19:35.269 rmmod nvme_keyring 00:19:35.269 19:53:29 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:35.269 19:53:29 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:19:35.269 19:53:29 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:19:35.269 19:53:29 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 82917 ']' 00:19:35.269 19:53:29 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 82917 00:19:35.269 19:53:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 82917 ']' 00:19:35.269 19:53:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 82917 00:19:35.269 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (82917) - No such process 00:19:35.269 Process with pid 82917 is not found 00:19:35.269 19:53:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 82917 is not found' 00:19:35.269 19:53:29 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:19:35.269 19:53:29 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:35.269 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:35.269 Waiting for block devices as requested 00:19:35.269 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:35.269 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:35.269 19:53:30 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:35.269 19:53:30 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:35.269 19:53:30 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:19:35.269 19:53:30 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:19:35.269 19:53:30 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:35.269 19:53:30 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:19:35.269 19:53:30 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:35.269 19:53:30 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:35.269 19:53:30 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:35.269 19:53:30 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:35.269 19:53:30 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:35.269 19:53:30 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:35.269 19:53:30 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:35.269 19:53:30 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:35.269 19:53:30 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:35.269 19:53:30 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:35.269 19:53:30 nvmf_abort_qd_sizes 
-- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:35.269 19:53:30 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:35.269 19:53:30 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:35.269 19:53:30 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:35.269 19:53:30 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:35.269 19:53:30 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:35.269 19:53:30 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:35.269 19:53:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:35.269 19:53:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:35.542 19:53:30 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:19:35.542 00:19:35.542 real 0m31.520s 00:19:35.542 user 0m55.777s 00:19:35.542 sys 0m10.285s 00:19:35.542 19:53:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:35.542 19:53:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:19:35.542 ************************************ 00:19:35.542 END TEST nvmf_abort_qd_sizes 00:19:35.542 ************************************ 00:19:35.542 19:53:30 -- spdk/autotest.sh@292 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:19:35.542 19:53:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:35.542 19:53:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:35.542 19:53:30 -- common/autotest_common.sh@10 -- # set +x 00:19:35.542 ************************************ 00:19:35.542 START TEST keyring_file 00:19:35.542 ************************************ 00:19:35.542 19:53:30 keyring_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:19:35.542 * Looking for test storage... 
00:19:35.542 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:19:35.542 19:53:30 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:35.542 19:53:30 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:19:35.542 19:53:30 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:35.542 19:53:30 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:35.542 19:53:30 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:35.542 19:53:30 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:35.542 19:53:30 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:35.542 19:53:30 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:19:35.542 19:53:30 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:19:35.542 19:53:30 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:19:35.542 19:53:30 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:19:35.542 19:53:30 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:19:35.542 19:53:30 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:19:35.542 19:53:30 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:19:35.542 19:53:30 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:35.542 19:53:30 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:19:35.542 19:53:30 keyring_file -- scripts/common.sh@345 -- # : 1 00:19:35.542 19:53:30 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:35.542 19:53:30 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:35.542 19:53:30 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:19:35.542 19:53:30 keyring_file -- scripts/common.sh@353 -- # local d=1 00:19:35.542 19:53:30 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:35.542 19:53:30 keyring_file -- scripts/common.sh@355 -- # echo 1 00:19:35.542 19:53:30 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:19:35.542 19:53:30 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:19:35.542 19:53:30 keyring_file -- scripts/common.sh@353 -- # local d=2 00:19:35.542 19:53:30 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:35.542 19:53:30 keyring_file -- scripts/common.sh@355 -- # echo 2 00:19:35.542 19:53:30 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:19:35.542 19:53:30 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:35.542 19:53:30 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:35.542 19:53:30 keyring_file -- scripts/common.sh@368 -- # return 0 00:19:35.542 19:53:30 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:35.542 19:53:30 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:35.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.542 --rc genhtml_branch_coverage=1 00:19:35.542 --rc genhtml_function_coverage=1 00:19:35.542 --rc genhtml_legend=1 00:19:35.542 --rc geninfo_all_blocks=1 00:19:35.543 --rc geninfo_unexecuted_blocks=1 00:19:35.543 00:19:35.543 ' 00:19:35.543 19:53:30 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:35.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.543 --rc genhtml_branch_coverage=1 00:19:35.543 --rc genhtml_function_coverage=1 00:19:35.543 --rc genhtml_legend=1 00:19:35.543 --rc geninfo_all_blocks=1 00:19:35.543 --rc 
geninfo_unexecuted_blocks=1 00:19:35.543 00:19:35.543 ' 00:19:35.543 19:53:30 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:35.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.543 --rc genhtml_branch_coverage=1 00:19:35.543 --rc genhtml_function_coverage=1 00:19:35.543 --rc genhtml_legend=1 00:19:35.543 --rc geninfo_all_blocks=1 00:19:35.543 --rc geninfo_unexecuted_blocks=1 00:19:35.543 00:19:35.543 ' 00:19:35.543 19:53:30 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:35.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.543 --rc genhtml_branch_coverage=1 00:19:35.543 --rc genhtml_function_coverage=1 00:19:35.543 --rc genhtml_legend=1 00:19:35.543 --rc geninfo_all_blocks=1 00:19:35.543 --rc geninfo_unexecuted_blocks=1 00:19:35.543 00:19:35.543 ' 00:19:35.543 19:53:30 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:19:35.543 19:53:30 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:35.543 19:53:30 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:19:35.543 19:53:30 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:35.543 19:53:30 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:35.543 19:53:30 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:35.543 19:53:30 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:35.543 19:53:30 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:35.543 19:53:30 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:35.543 19:53:30 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:35.543 19:53:30 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:35.543 19:53:30 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:35.543 19:53:30 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:35.543 19:53:30 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:19:35.543 19:53:30 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=91838eb1-5852-43eb-90b2-09876f360ab2 00:19:35.543 19:53:30 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:35.543 19:53:30 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:35.543 19:53:30 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:35.543 19:53:30 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:35.543 19:53:30 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:35.543 19:53:30 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:19:35.543 19:53:30 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:35.543 19:53:30 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:35.543 19:53:30 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:35.543 19:53:30 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.543 19:53:30 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.543 19:53:30 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.543 19:53:30 keyring_file -- paths/export.sh@5 -- # export PATH 00:19:35.543 19:53:30 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.543 19:53:30 keyring_file -- nvmf/common.sh@51 -- # : 0 00:19:35.543 19:53:30 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:35.543 19:53:30 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:35.543 19:53:30 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:35.543 19:53:30 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:35.543 19:53:30 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:35.543 19:53:30 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:35.543 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:35.543 19:53:30 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:35.543 19:53:30 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:35.543 19:53:30 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:35.543 19:53:30 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:19:35.543 19:53:30 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:19:35.543 19:53:30 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:19:35.543 19:53:30 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:19:35.543 19:53:30 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:19:35.543 19:53:30 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:19:35.543 19:53:30 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:19:35.543 19:53:30 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:19:35.543 19:53:30 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:19:35.543 19:53:30 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:19:35.543 19:53:30 keyring_file -- keyring/common.sh@17 -- # digest=0 00:19:35.543 19:53:30 keyring_file -- keyring/common.sh@18 -- # mktemp 00:19:35.543 19:53:30 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.GsJYkKg8KN 00:19:35.543 19:53:30 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:19:35.543 19:53:30 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:19:35.543 19:53:30 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:19:35.543 19:53:30 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:35.543 19:53:30 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:19:35.543 19:53:30 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:19:35.543 19:53:30 keyring_file -- nvmf/common.sh@733 -- # python - 00:19:35.543 19:53:30 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.GsJYkKg8KN 00:19:35.543 19:53:30 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.GsJYkKg8KN 00:19:35.543 19:53:30 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.GsJYkKg8KN 00:19:35.543 19:53:30 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:19:35.543 19:53:30 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:19:35.543 19:53:30 keyring_file -- keyring/common.sh@17 -- # name=key1 00:19:35.543 19:53:30 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:19:35.543 19:53:30 keyring_file -- keyring/common.sh@17 -- # digest=0 00:19:35.543 19:53:30 keyring_file -- keyring/common.sh@18 -- # mktemp 00:19:35.543 19:53:30 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.sMXW5Wkziz 00:19:35.543 19:53:30 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:19:35.543 19:53:30 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:19:35.543 19:53:30 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:19:35.543 19:53:30 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:35.544 19:53:30 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:19:35.544 19:53:30 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:19:35.544 19:53:30 keyring_file -- nvmf/common.sh@733 -- # python - 00:19:35.840 19:53:30 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.sMXW5Wkziz 00:19:35.840 19:53:30 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.sMXW5Wkziz 00:19:35.840 19:53:30 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.sMXW5Wkziz 00:19:35.840 19:53:30 keyring_file -- keyring/file.sh@30 -- # tgtpid=83826 00:19:35.840 19:53:30 keyring_file -- keyring/file.sh@32 -- # waitforlisten 83826 00:19:35.840 19:53:30 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 83826 ']' 00:19:35.840 19:53:30 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:35.840 19:53:30 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:35.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
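Note on the prep_key traces above: each key file is produced by running the raw hex key through format_interchange_psk (the inline python call) into a mktemp path and restricting it to mode 0600; the redirect into the temp file is implied by the trace but hidden by xtrace. A rough standalone equivalent reusing SPDK's own helpers is sketched below; sourcing keyring/common.sh outside the test harness is an assumption of this sketch.
# Recreate the two PSK files that the bperf steps later register as key0 and key1
# (the /tmp/tmp.* names differ per mktemp invocation).
source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh
key0path=$(mktemp) && format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$key0path"
key1path=$(mktemp) && format_interchange_psk 112233445566778899aabbccddeeff00 0 > "$key1path"
chmod 0600 "$key0path" "$key1path"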
00:19:35.840 19:53:30 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:35.840 19:53:30 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:35.840 19:53:30 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:19:35.840 19:53:30 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:35.840 [2024-11-26 19:53:30.852405] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:19:35.840 [2024-11-26 19:53:30.852464] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83826 ] 00:19:35.840 [2024-11-26 19:53:30.992052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:35.840 [2024-11-26 19:53:31.026500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:35.840 [2024-11-26 19:53:31.068119] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:36.773 19:53:31 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:36.773 19:53:31 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:19:36.773 19:53:31 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:19:36.773 19:53:31 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.773 19:53:31 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:19:36.773 [2024-11-26 19:53:31.705815] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:36.773 null0 00:19:36.773 [2024-11-26 19:53:31.737785] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:36.773 [2024-11-26 19:53:31.738010] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:19:36.773 19:53:31 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.773 19:53:31 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:19:36.773 19:53:31 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:19:36.773 19:53:31 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:19:36.773 19:53:31 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:36.773 19:53:31 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:36.773 19:53:31 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:36.773 19:53:31 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:36.773 19:53:31 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:19:36.773 19:53:31 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.773 19:53:31 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:19:36.773 [2024-11-26 19:53:31.765763] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:19:36.773 request: 00:19:36.773 { 00:19:36.773 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:19:36.773 "secure_channel": false, 00:19:36.773 "listen_address": { 00:19:36.773 "trtype": "tcp", 00:19:36.773 "traddr": 
"127.0.0.1", 00:19:36.773 "trsvcid": "4420" 00:19:36.773 }, 00:19:36.773 "method": "nvmf_subsystem_add_listener", 00:19:36.773 "req_id": 1 00:19:36.773 } 00:19:36.773 Got JSON-RPC error response 00:19:36.773 response: 00:19:36.773 { 00:19:36.773 "code": -32602, 00:19:36.773 "message": "Invalid parameters" 00:19:36.773 } 00:19:36.773 19:53:31 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:36.773 19:53:31 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:19:36.773 19:53:31 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:36.773 19:53:31 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:36.773 19:53:31 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:36.773 19:53:31 keyring_file -- keyring/file.sh@47 -- # bperfpid=83839 00:19:36.773 19:53:31 keyring_file -- keyring/file.sh@49 -- # waitforlisten 83839 /var/tmp/bperf.sock 00:19:36.773 19:53:31 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:19:36.773 19:53:31 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 83839 ']' 00:19:36.773 19:53:31 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:36.773 19:53:31 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:36.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:36.773 19:53:31 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:36.773 19:53:31 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:36.773 19:53:31 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:19:36.773 [2024-11-26 19:53:31.806562] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
00:19:36.773 [2024-11-26 19:53:31.806618] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83839 ] 00:19:36.773 [2024-11-26 19:53:31.941543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.773 [2024-11-26 19:53:31.977030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:36.773 [2024-11-26 19:53:32.007321] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:37.707 19:53:32 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:37.707 19:53:32 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:19:37.707 19:53:32 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.GsJYkKg8KN 00:19:37.707 19:53:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.GsJYkKg8KN 00:19:37.707 19:53:32 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.sMXW5Wkziz 00:19:37.707 19:53:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.sMXW5Wkziz 00:19:37.966 19:53:33 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:19:37.966 19:53:33 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:19:37.966 19:53:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:19:37.966 19:53:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:19:37.966 19:53:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:19:38.224 19:53:33 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.GsJYkKg8KN == \/\t\m\p\/\t\m\p\.\G\s\J\Y\k\K\g\8\K\N ]] 00:19:38.224 19:53:33 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:19:38.224 19:53:33 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:19:38.224 19:53:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:19:38.224 19:53:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:19:38.224 19:53:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:19:38.482 19:53:33 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.sMXW5Wkziz == \/\t\m\p\/\t\m\p\.\s\M\X\W\5\W\k\z\i\z ]] 00:19:38.482 19:53:33 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:19:38.482 19:53:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:19:38.482 19:53:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:19:38.482 19:53:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:19:38.482 19:53:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:19:38.482 19:53:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:19:38.482 19:53:33 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:19:38.482 19:53:33 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:19:38.482 19:53:33 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:19:38.482 19:53:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:19:38.482 
19:53:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:19:38.482 19:53:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:19:38.482 19:53:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:19:38.740 19:53:33 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:19:38.740 19:53:33 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:19:38.740 19:53:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:19:38.998 [2024-11-26 19:53:34.091331] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:38.999 nvme0n1 00:19:38.999 19:53:34 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:19:38.999 19:53:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:19:38.999 19:53:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:19:38.999 19:53:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:19:38.999 19:53:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:19:38.999 19:53:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:19:39.257 19:53:34 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:19:39.257 19:53:34 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:19:39.257 19:53:34 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:19:39.257 19:53:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:19:39.257 19:53:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:19:39.257 19:53:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:19:39.257 19:53:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:19:39.516 19:53:34 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:19:39.516 19:53:34 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:39.516 Running I/O for 1 seconds... 
00:19:40.450 20145.00 IOPS, 78.69 MiB/s 00:19:40.450 Latency(us) 00:19:40.450 [2024-11-26T19:53:35.697Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:40.450 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:19:40.450 nvme0n1 : 1.00 20179.63 78.83 0.00 0.00 6327.61 3138.17 10838.65 00:19:40.450 [2024-11-26T19:53:35.697Z] =================================================================================================================== 00:19:40.450 [2024-11-26T19:53:35.697Z] Total : 20179.63 78.83 0.00 0.00 6327.61 3138.17 10838.65 00:19:40.450 { 00:19:40.450 "results": [ 00:19:40.450 { 00:19:40.450 "job": "nvme0n1", 00:19:40.450 "core_mask": "0x2", 00:19:40.450 "workload": "randrw", 00:19:40.450 "percentage": 50, 00:19:40.450 "status": "finished", 00:19:40.450 "queue_depth": 128, 00:19:40.450 "io_size": 4096, 00:19:40.450 "runtime": 1.004627, 00:19:40.450 "iops": 20179.628857277377, 00:19:40.450 "mibps": 78.82667522373976, 00:19:40.450 "io_failed": 0, 00:19:40.450 "io_timeout": 0, 00:19:40.450 "avg_latency_us": 6327.610599015743, 00:19:40.450 "min_latency_us": 3138.166153846154, 00:19:40.450 "max_latency_us": 10838.646153846154 00:19:40.450 } 00:19:40.450 ], 00:19:40.450 "core_count": 1 00:19:40.450 } 00:19:40.450 19:53:35 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:19:40.450 19:53:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:19:40.708 19:53:35 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:19:40.708 19:53:35 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:19:40.708 19:53:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:19:40.708 19:53:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:19:40.708 19:53:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:19:40.708 19:53:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:19:40.966 19:53:36 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:19:40.966 19:53:36 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:19:40.966 19:53:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:19:40.966 19:53:36 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:19:40.966 19:53:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:19:40.966 19:53:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:19:40.966 19:53:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:19:41.225 19:53:36 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:19:41.225 19:53:36 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:19:41.225 19:53:36 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:19:41.225 19:53:36 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:19:41.225 19:53:36 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:19:41.225 19:53:36 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:41.225 19:53:36 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:19:41.225 19:53:36 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:41.225 19:53:36 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:19:41.225 19:53:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:19:41.225 [2024-11-26 19:53:36.451335] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:41.225 [2024-11-26 19:53:36.452091] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b85d0 (107): Transport endpoint is not connected 00:19:41.225 [2024-11-26 19:53:36.453086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b85d0 (9): Bad file descriptor 00:19:41.225 [2024-11-26 19:53:36.454084] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:19:41.225 [2024-11-26 19:53:36.454424] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:19:41.225 [2024-11-26 19:53:36.454481] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:19:41.225 [2024-11-26 19:53:36.454547] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:19:41.225 request: 00:19:41.225 { 00:19:41.225 "name": "nvme0", 00:19:41.225 "trtype": "tcp", 00:19:41.225 "traddr": "127.0.0.1", 00:19:41.225 "adrfam": "ipv4", 00:19:41.225 "trsvcid": "4420", 00:19:41.225 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:41.225 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:41.225 "prchk_reftag": false, 00:19:41.225 "prchk_guard": false, 00:19:41.225 "hdgst": false, 00:19:41.225 "ddgst": false, 00:19:41.225 "psk": "key1", 00:19:41.225 "allow_unrecognized_csi": false, 00:19:41.225 "method": "bdev_nvme_attach_controller", 00:19:41.225 "req_id": 1 00:19:41.225 } 00:19:41.225 Got JSON-RPC error response 00:19:41.225 response: 00:19:41.225 { 00:19:41.225 "code": -5, 00:19:41.225 "message": "Input/output error" 00:19:41.225 } 00:19:41.484 19:53:36 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:19:41.484 19:53:36 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:41.484 19:53:36 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:41.484 19:53:36 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:41.484 19:53:36 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:19:41.484 19:53:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:19:41.484 19:53:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:19:41.484 19:53:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:19:41.484 19:53:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:19:41.484 19:53:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:19:41.484 19:53:36 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:19:41.484 19:53:36 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:19:41.484 19:53:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:19:41.484 19:53:36 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:19:41.484 19:53:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:19:41.484 19:53:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:19:41.484 19:53:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:19:41.742 19:53:36 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:19:41.742 19:53:36 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:19:41.742 19:53:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:19:42.052 19:53:37 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:19:42.052 19:53:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:19:42.311 19:53:37 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:19:42.311 19:53:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:19:42.311 19:53:37 keyring_file -- keyring/file.sh@78 -- # jq length 00:19:42.311 19:53:37 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:19:42.311 19:53:37 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.GsJYkKg8KN 00:19:42.311 19:53:37 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.GsJYkKg8KN 00:19:42.311 19:53:37 keyring_file -- 
common/autotest_common.sh@652 -- # local es=0 00:19:42.311 19:53:37 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.GsJYkKg8KN 00:19:42.311 19:53:37 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:19:42.311 19:53:37 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:42.311 19:53:37 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:19:42.311 19:53:37 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:42.311 19:53:37 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.GsJYkKg8KN 00:19:42.311 19:53:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.GsJYkKg8KN 00:19:42.569 [2024-11-26 19:53:37.692641] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.GsJYkKg8KN': 0100660 00:19:42.569 [2024-11-26 19:53:37.692751] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:42.569 request: 00:19:42.569 { 00:19:42.569 "name": "key0", 00:19:42.569 "path": "/tmp/tmp.GsJYkKg8KN", 00:19:42.569 "method": "keyring_file_add_key", 00:19:42.569 "req_id": 1 00:19:42.569 } 00:19:42.569 Got JSON-RPC error response 00:19:42.569 response: 00:19:42.569 { 00:19:42.569 "code": -1, 00:19:42.569 "message": "Operation not permitted" 00:19:42.569 } 00:19:42.569 19:53:37 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:19:42.569 19:53:37 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:42.569 19:53:37 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:42.569 19:53:37 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:42.569 19:53:37 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.GsJYkKg8KN 00:19:42.569 19:53:37 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.GsJYkKg8KN 00:19:42.569 19:53:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.GsJYkKg8KN 00:19:42.829 19:53:37 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.GsJYkKg8KN 00:19:42.829 19:53:37 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:19:42.829 19:53:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:19:42.829 19:53:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:19:42.829 19:53:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:19:42.829 19:53:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:19:42.829 19:53:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:19:43.088 19:53:38 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:19:43.088 19:53:38 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:19:43.088 19:53:38 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:19:43.088 19:53:38 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:19:43.088 19:53:38 
keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:19:43.088 19:53:38 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:43.088 19:53:38 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:19:43.088 19:53:38 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:43.088 19:53:38 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:19:43.088 19:53:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:19:43.088 [2024-11-26 19:53:38.320757] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.GsJYkKg8KN': No such file or directory 00:19:43.088 [2024-11-26 19:53:38.320787] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:19:43.088 [2024-11-26 19:53:38.320800] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:19:43.088 [2024-11-26 19:53:38.320805] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:19:43.088 [2024-11-26 19:53:38.320809] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:19:43.088 [2024-11-26 19:53:38.320813] bdev_nvme.c:6769:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:19:43.088 request: 00:19:43.088 { 00:19:43.088 "name": "nvme0", 00:19:43.088 "trtype": "tcp", 00:19:43.088 "traddr": "127.0.0.1", 00:19:43.088 "adrfam": "ipv4", 00:19:43.088 "trsvcid": "4420", 00:19:43.088 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:43.088 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:43.088 "prchk_reftag": false, 00:19:43.088 "prchk_guard": false, 00:19:43.088 "hdgst": false, 00:19:43.088 "ddgst": false, 00:19:43.088 "psk": "key0", 00:19:43.088 "allow_unrecognized_csi": false, 00:19:43.088 "method": "bdev_nvme_attach_controller", 00:19:43.088 "req_id": 1 00:19:43.088 } 00:19:43.088 Got JSON-RPC error response 00:19:43.088 response: 00:19:43.088 { 00:19:43.088 "code": -19, 00:19:43.088 "message": "No such device" 00:19:43.088 } 00:19:43.347 19:53:38 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:19:43.347 19:53:38 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:43.347 19:53:38 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:43.347 19:53:38 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:43.347 19:53:38 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:19:43.347 19:53:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:19:43.347 19:53:38 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:19:43.347 19:53:38 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:19:43.347 19:53:38 keyring_file -- keyring/common.sh@17 -- # name=key0 00:19:43.347 19:53:38 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:19:43.347 
19:53:38 keyring_file -- keyring/common.sh@17 -- # digest=0 00:19:43.347 19:53:38 keyring_file -- keyring/common.sh@18 -- # mktemp 00:19:43.347 19:53:38 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.CN690URBjV 00:19:43.347 19:53:38 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:19:43.347 19:53:38 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:19:43.347 19:53:38 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:19:43.347 19:53:38 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:43.347 19:53:38 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:19:43.347 19:53:38 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:19:43.347 19:53:38 keyring_file -- nvmf/common.sh@733 -- # python - 00:19:43.605 19:53:38 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.CN690URBjV 00:19:43.605 19:53:38 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.CN690URBjV 00:19:43.605 19:53:38 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.CN690URBjV 00:19:43.605 19:53:38 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.CN690URBjV 00:19:43.605 19:53:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.CN690URBjV 00:19:43.605 19:53:38 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:19:43.605 19:53:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:19:43.863 nvme0n1 00:19:43.863 19:53:39 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:19:43.863 19:53:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:19:43.863 19:53:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:19:43.863 19:53:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:19:43.863 19:53:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:19:43.863 19:53:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:19:44.121 19:53:39 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:19:44.121 19:53:39 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:19:44.121 19:53:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:19:44.379 19:53:39 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:19:44.379 19:53:39 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:19:44.379 19:53:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:19:44.379 19:53:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:19:44.379 19:53:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:19:44.637 19:53:39 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:19:44.637 19:53:39 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:19:44.637 19:53:39 keyring_file -- 
keyring/common.sh@12 -- # get_key key0 00:19:44.637 19:53:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:19:44.637 19:53:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:19:44.637 19:53:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:19:44.637 19:53:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:19:44.896 19:53:39 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:19:44.896 19:53:39 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:19:44.896 19:53:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:19:44.896 19:53:40 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:19:44.896 19:53:40 keyring_file -- keyring/file.sh@105 -- # jq length 00:19:44.896 19:53:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:19:45.154 19:53:40 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:19:45.154 19:53:40 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.CN690URBjV 00:19:45.154 19:53:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.CN690URBjV 00:19:45.411 19:53:40 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.sMXW5Wkziz 00:19:45.412 19:53:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.sMXW5Wkziz 00:19:45.669 19:53:40 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:19:45.669 19:53:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:19:45.927 nvme0n1 00:19:45.927 19:53:40 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:19:45.927 19:53:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:19:46.186 19:53:41 keyring_file -- keyring/file.sh@113 -- # config='{ 00:19:46.186 "subsystems": [ 00:19:46.186 { 00:19:46.186 "subsystem": "keyring", 00:19:46.186 "config": [ 00:19:46.186 { 00:19:46.186 "method": "keyring_file_add_key", 00:19:46.186 "params": { 00:19:46.186 "name": "key0", 00:19:46.186 "path": "/tmp/tmp.CN690URBjV" 00:19:46.186 } 00:19:46.186 }, 00:19:46.186 { 00:19:46.186 "method": "keyring_file_add_key", 00:19:46.186 "params": { 00:19:46.186 "name": "key1", 00:19:46.186 "path": "/tmp/tmp.sMXW5Wkziz" 00:19:46.186 } 00:19:46.186 } 00:19:46.186 ] 00:19:46.186 }, 00:19:46.186 { 00:19:46.186 "subsystem": "iobuf", 00:19:46.186 "config": [ 00:19:46.186 { 00:19:46.186 "method": "iobuf_set_options", 00:19:46.186 "params": { 00:19:46.186 "small_pool_count": 8192, 00:19:46.186 "large_pool_count": 1024, 00:19:46.186 "small_bufsize": 8192, 00:19:46.186 "large_bufsize": 135168, 00:19:46.186 "enable_numa": false 00:19:46.186 } 00:19:46.186 } 00:19:46.186 ] 00:19:46.186 }, 00:19:46.186 { 00:19:46.186 "subsystem": 
"sock", 00:19:46.186 "config": [ 00:19:46.186 { 00:19:46.186 "method": "sock_set_default_impl", 00:19:46.186 "params": { 00:19:46.186 "impl_name": "uring" 00:19:46.186 } 00:19:46.186 }, 00:19:46.186 { 00:19:46.186 "method": "sock_impl_set_options", 00:19:46.186 "params": { 00:19:46.186 "impl_name": "ssl", 00:19:46.186 "recv_buf_size": 4096, 00:19:46.186 "send_buf_size": 4096, 00:19:46.186 "enable_recv_pipe": true, 00:19:46.186 "enable_quickack": false, 00:19:46.186 "enable_placement_id": 0, 00:19:46.186 "enable_zerocopy_send_server": true, 00:19:46.186 "enable_zerocopy_send_client": false, 00:19:46.186 "zerocopy_threshold": 0, 00:19:46.186 "tls_version": 0, 00:19:46.186 "enable_ktls": false 00:19:46.186 } 00:19:46.186 }, 00:19:46.186 { 00:19:46.186 "method": "sock_impl_set_options", 00:19:46.186 "params": { 00:19:46.186 "impl_name": "posix", 00:19:46.186 "recv_buf_size": 2097152, 00:19:46.186 "send_buf_size": 2097152, 00:19:46.186 "enable_recv_pipe": true, 00:19:46.186 "enable_quickack": false, 00:19:46.186 "enable_placement_id": 0, 00:19:46.186 "enable_zerocopy_send_server": true, 00:19:46.186 "enable_zerocopy_send_client": false, 00:19:46.186 "zerocopy_threshold": 0, 00:19:46.186 "tls_version": 0, 00:19:46.186 "enable_ktls": false 00:19:46.186 } 00:19:46.186 }, 00:19:46.186 { 00:19:46.186 "method": "sock_impl_set_options", 00:19:46.186 "params": { 00:19:46.186 "impl_name": "uring", 00:19:46.186 "recv_buf_size": 2097152, 00:19:46.186 "send_buf_size": 2097152, 00:19:46.186 "enable_recv_pipe": true, 00:19:46.186 "enable_quickack": false, 00:19:46.186 "enable_placement_id": 0, 00:19:46.186 "enable_zerocopy_send_server": false, 00:19:46.186 "enable_zerocopy_send_client": false, 00:19:46.186 "zerocopy_threshold": 0, 00:19:46.186 "tls_version": 0, 00:19:46.186 "enable_ktls": false 00:19:46.186 } 00:19:46.186 } 00:19:46.186 ] 00:19:46.186 }, 00:19:46.186 { 00:19:46.186 "subsystem": "vmd", 00:19:46.186 "config": [] 00:19:46.186 }, 00:19:46.186 { 00:19:46.186 "subsystem": "accel", 00:19:46.186 "config": [ 00:19:46.186 { 00:19:46.186 "method": "accel_set_options", 00:19:46.186 "params": { 00:19:46.186 "small_cache_size": 128, 00:19:46.186 "large_cache_size": 16, 00:19:46.186 "task_count": 2048, 00:19:46.186 "sequence_count": 2048, 00:19:46.186 "buf_count": 2048 00:19:46.186 } 00:19:46.186 } 00:19:46.186 ] 00:19:46.186 }, 00:19:46.186 { 00:19:46.186 "subsystem": "bdev", 00:19:46.186 "config": [ 00:19:46.186 { 00:19:46.186 "method": "bdev_set_options", 00:19:46.186 "params": { 00:19:46.186 "bdev_io_pool_size": 65535, 00:19:46.186 "bdev_io_cache_size": 256, 00:19:46.186 "bdev_auto_examine": true, 00:19:46.186 "iobuf_small_cache_size": 128, 00:19:46.186 "iobuf_large_cache_size": 16 00:19:46.186 } 00:19:46.186 }, 00:19:46.186 { 00:19:46.186 "method": "bdev_raid_set_options", 00:19:46.186 "params": { 00:19:46.186 "process_window_size_kb": 1024, 00:19:46.186 "process_max_bandwidth_mb_sec": 0 00:19:46.186 } 00:19:46.186 }, 00:19:46.186 { 00:19:46.186 "method": "bdev_iscsi_set_options", 00:19:46.186 "params": { 00:19:46.186 "timeout_sec": 30 00:19:46.186 } 00:19:46.186 }, 00:19:46.186 { 00:19:46.186 "method": "bdev_nvme_set_options", 00:19:46.186 "params": { 00:19:46.186 "action_on_timeout": "none", 00:19:46.186 "timeout_us": 0, 00:19:46.186 "timeout_admin_us": 0, 00:19:46.186 "keep_alive_timeout_ms": 10000, 00:19:46.186 "arbitration_burst": 0, 00:19:46.186 "low_priority_weight": 0, 00:19:46.186 "medium_priority_weight": 0, 00:19:46.186 "high_priority_weight": 0, 00:19:46.186 "nvme_adminq_poll_period_us": 
10000, 00:19:46.186 "nvme_ioq_poll_period_us": 0, 00:19:46.186 "io_queue_requests": 512, 00:19:46.186 "delay_cmd_submit": true, 00:19:46.186 "transport_retry_count": 4, 00:19:46.186 "bdev_retry_count": 3, 00:19:46.186 "transport_ack_timeout": 0, 00:19:46.186 "ctrlr_loss_timeout_sec": 0, 00:19:46.186 "reconnect_delay_sec": 0, 00:19:46.186 "fast_io_fail_timeout_sec": 0, 00:19:46.186 "disable_auto_failback": false, 00:19:46.187 "generate_uuids": false, 00:19:46.187 "transport_tos": 0, 00:19:46.187 "nvme_error_stat": false, 00:19:46.187 "rdma_srq_size": 0, 00:19:46.187 "io_path_stat": false, 00:19:46.187 "allow_accel_sequence": false, 00:19:46.187 "rdma_max_cq_size": 0, 00:19:46.187 "rdma_cm_event_timeout_ms": 0, 00:19:46.187 "dhchap_digests": [ 00:19:46.187 "sha256", 00:19:46.187 "sha384", 00:19:46.187 "sha512" 00:19:46.187 ], 00:19:46.187 "dhchap_dhgroups": [ 00:19:46.187 "null", 00:19:46.187 "ffdhe2048", 00:19:46.187 "ffdhe3072", 00:19:46.187 "ffdhe4096", 00:19:46.187 "ffdhe6144", 00:19:46.187 "ffdhe8192" 00:19:46.187 ] 00:19:46.187 } 00:19:46.187 }, 00:19:46.187 { 00:19:46.187 "method": "bdev_nvme_attach_controller", 00:19:46.187 "params": { 00:19:46.187 "name": "nvme0", 00:19:46.187 "trtype": "TCP", 00:19:46.187 "adrfam": "IPv4", 00:19:46.187 "traddr": "127.0.0.1", 00:19:46.187 "trsvcid": "4420", 00:19:46.187 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:46.187 "prchk_reftag": false, 00:19:46.187 "prchk_guard": false, 00:19:46.187 "ctrlr_loss_timeout_sec": 0, 00:19:46.187 "reconnect_delay_sec": 0, 00:19:46.187 "fast_io_fail_timeout_sec": 0, 00:19:46.187 "psk": "key0", 00:19:46.187 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:46.187 "hdgst": false, 00:19:46.187 "ddgst": false, 00:19:46.187 "multipath": "multipath" 00:19:46.187 } 00:19:46.187 }, 00:19:46.187 { 00:19:46.187 "method": "bdev_nvme_set_hotplug", 00:19:46.187 "params": { 00:19:46.187 "period_us": 100000, 00:19:46.187 "enable": false 00:19:46.187 } 00:19:46.187 }, 00:19:46.187 { 00:19:46.187 "method": "bdev_wait_for_examine" 00:19:46.187 } 00:19:46.187 ] 00:19:46.187 }, 00:19:46.187 { 00:19:46.187 "subsystem": "nbd", 00:19:46.187 "config": [] 00:19:46.187 } 00:19:46.187 ] 00:19:46.187 }' 00:19:46.187 19:53:41 keyring_file -- keyring/file.sh@115 -- # killprocess 83839 00:19:46.187 19:53:41 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 83839 ']' 00:19:46.187 19:53:41 keyring_file -- common/autotest_common.sh@958 -- # kill -0 83839 00:19:46.187 19:53:41 keyring_file -- common/autotest_common.sh@959 -- # uname 00:19:46.187 19:53:41 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:46.187 19:53:41 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83839 00:19:46.187 killing process with pid 83839 00:19:46.187 Received shutdown signal, test time was about 1.000000 seconds 00:19:46.187 00:19:46.187 Latency(us) 00:19:46.187 [2024-11-26T19:53:41.434Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:46.187 [2024-11-26T19:53:41.434Z] =================================================================================================================== 00:19:46.187 [2024-11-26T19:53:41.434Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:46.187 19:53:41 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:46.187 19:53:41 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:46.187 19:53:41 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83839' 00:19:46.187 
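[editor note] The JSON blob captured above with save_config is reused a few entries below: keyring/file.sh stops the first bdevperf (pid 83839) and starts a second one (pid 84073) that reads the same configuration from /dev/fd/63, so both file-based keys are registered before any RPC arrives. A minimal sketch of that relaunch pattern, assuming the /dev/fd/63 path seen below comes from bash process substitution (paths abbreviated relative to the spdk repo):

  config=$(scripts/rpc.py -s /var/tmp/bperf.sock save_config)      # capture from the old bdevperf instance
  build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
      -r /var/tmp/bperf.sock -z -c <(echo "$config")               # bash expands <(...) to a /dev/fd/NN path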
19:53:41 keyring_file -- common/autotest_common.sh@973 -- # kill 83839 00:19:46.187 19:53:41 keyring_file -- common/autotest_common.sh@978 -- # wait 83839 00:19:46.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:46.187 19:53:41 keyring_file -- keyring/file.sh@118 -- # bperfpid=84073 00:19:46.187 19:53:41 keyring_file -- keyring/file.sh@120 -- # waitforlisten 84073 /var/tmp/bperf.sock 00:19:46.187 19:53:41 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 84073 ']' 00:19:46.187 19:53:41 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:46.187 19:53:41 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:46.187 19:53:41 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:46.187 19:53:41 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:46.187 19:53:41 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:19:46.187 19:53:41 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:19:46.187 19:53:41 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:19:46.187 "subsystems": [ 00:19:46.187 { 00:19:46.187 "subsystem": "keyring", 00:19:46.187 "config": [ 00:19:46.187 { 00:19:46.187 "method": "keyring_file_add_key", 00:19:46.187 "params": { 00:19:46.187 "name": "key0", 00:19:46.187 "path": "/tmp/tmp.CN690URBjV" 00:19:46.187 } 00:19:46.187 }, 00:19:46.187 { 00:19:46.187 "method": "keyring_file_add_key", 00:19:46.187 "params": { 00:19:46.187 "name": "key1", 00:19:46.187 "path": "/tmp/tmp.sMXW5Wkziz" 00:19:46.187 } 00:19:46.187 } 00:19:46.187 ] 00:19:46.187 }, 00:19:46.187 { 00:19:46.187 "subsystem": "iobuf", 00:19:46.187 "config": [ 00:19:46.187 { 00:19:46.187 "method": "iobuf_set_options", 00:19:46.187 "params": { 00:19:46.187 "small_pool_count": 8192, 00:19:46.187 "large_pool_count": 1024, 00:19:46.187 "small_bufsize": 8192, 00:19:46.187 "large_bufsize": 135168, 00:19:46.187 "enable_numa": false 00:19:46.187 } 00:19:46.187 } 00:19:46.187 ] 00:19:46.187 }, 00:19:46.187 { 00:19:46.187 "subsystem": "sock", 00:19:46.187 "config": [ 00:19:46.187 { 00:19:46.187 "method": "sock_set_default_impl", 00:19:46.187 "params": { 00:19:46.187 "impl_name": "uring" 00:19:46.187 } 00:19:46.187 }, 00:19:46.187 { 00:19:46.187 "method": "sock_impl_set_options", 00:19:46.187 "params": { 00:19:46.187 "impl_name": "ssl", 00:19:46.187 "recv_buf_size": 4096, 00:19:46.187 "send_buf_size": 4096, 00:19:46.187 "enable_recv_pipe": true, 00:19:46.187 "enable_quickack": false, 00:19:46.187 "enable_placement_id": 0, 00:19:46.187 "enable_zerocopy_send_server": true, 00:19:46.187 "enable_zerocopy_send_client": false, 00:19:46.187 "zerocopy_threshold": 0, 00:19:46.187 "tls_version": 0, 00:19:46.187 "enable_ktls": false 00:19:46.187 } 00:19:46.187 }, 00:19:46.187 { 00:19:46.187 "method": "sock_impl_set_options", 00:19:46.187 "params": { 00:19:46.187 "impl_name": "posix", 00:19:46.187 "recv_buf_size": 2097152, 00:19:46.187 "send_buf_size": 2097152, 00:19:46.187 "enable_recv_pipe": true, 00:19:46.187 "enable_quickack": false, 00:19:46.187 "enable_placement_id": 0, 00:19:46.187 "enable_zerocopy_send_server": true, 00:19:46.187 "enable_zerocopy_send_client": false, 00:19:46.187 "zerocopy_threshold": 0, 00:19:46.187 "tls_version": 0, 00:19:46.187 "enable_ktls": false 
00:19:46.187 } 00:19:46.187 }, 00:19:46.187 { 00:19:46.187 "method": "sock_impl_set_options", 00:19:46.187 "params": { 00:19:46.187 "impl_name": "uring", 00:19:46.187 "recv_buf_size": 2097152, 00:19:46.187 "send_buf_size": 2097152, 00:19:46.187 "enable_recv_pipe": true, 00:19:46.187 "enable_quickack": false, 00:19:46.187 "enable_placement_id": 0, 00:19:46.187 "enable_zerocopy_send_server": false, 00:19:46.187 "enable_zerocopy_send_client": false, 00:19:46.187 "zerocopy_threshold": 0, 00:19:46.187 "tls_version": 0, 00:19:46.187 "enable_ktls": false 00:19:46.187 } 00:19:46.187 } 00:19:46.187 ] 00:19:46.187 }, 00:19:46.187 { 00:19:46.187 "subsystem": "vmd", 00:19:46.187 "config": [] 00:19:46.187 }, 00:19:46.187 { 00:19:46.187 "subsystem": "accel", 00:19:46.187 "config": [ 00:19:46.187 { 00:19:46.187 "method": "accel_set_options", 00:19:46.187 "params": { 00:19:46.187 "small_cache_size": 128, 00:19:46.187 "large_cache_size": 16, 00:19:46.187 "task_count": 2048, 00:19:46.187 "sequence_count": 2048, 00:19:46.187 "buf_count": 2048 00:19:46.187 } 00:19:46.187 } 00:19:46.187 ] 00:19:46.187 }, 00:19:46.187 { 00:19:46.187 "subsystem": "bdev", 00:19:46.187 "config": [ 00:19:46.187 { 00:19:46.187 "method": "bdev_set_options", 00:19:46.187 "params": { 00:19:46.187 "bdev_io_pool_size": 65535, 00:19:46.187 "bdev_io_cache_size": 256, 00:19:46.187 "bdev_auto_examine": true, 00:19:46.187 "iobuf_small_cache_size": 128, 00:19:46.187 "iobuf_large_cache_size": 16 00:19:46.187 } 00:19:46.187 }, 00:19:46.188 { 00:19:46.188 "method": "bdev_raid_set_options", 00:19:46.188 "params": { 00:19:46.188 "process_window_size_kb": 1024, 00:19:46.188 "process_max_bandwidth_mb_sec": 0 00:19:46.188 } 00:19:46.188 }, 00:19:46.188 { 00:19:46.188 "method": "bdev_iscsi_set_options", 00:19:46.188 "params": { 00:19:46.188 "timeout_sec": 30 00:19:46.188 } 00:19:46.188 }, 00:19:46.188 { 00:19:46.188 "method": "bdev_nvme_set_options", 00:19:46.188 "params": { 00:19:46.188 "action_on_timeout": "none", 00:19:46.188 "timeout_us": 0, 00:19:46.188 "timeout_admin_us": 0, 00:19:46.188 "keep_alive_timeout_ms": 10000, 00:19:46.188 "arbitration_burst": 0, 00:19:46.188 "low_priority_weight": 0, 00:19:46.188 "medium_priority_weight": 0, 00:19:46.188 "high_priority_weight": 0, 00:19:46.188 "nvme_adminq_poll_period_us": 10000, 00:19:46.188 "nvme_ioq_poll_period_us": 0, 00:19:46.188 "io_queue_requests": 512, 00:19:46.188 "delay_cmd_submit": true, 00:19:46.188 "transport_retry_count": 4, 00:19:46.188 "bdev_retry_count": 3, 00:19:46.188 "transport_ack_timeout": 0, 00:19:46.188 "ctrlr_loss_timeout_sec": 0, 00:19:46.188 "reconnect_delay_sec": 0, 00:19:46.188 "fast_io_fail_timeout_sec": 0, 00:19:46.188 "disable_auto_failback": false, 00:19:46.188 "generate_uuids": false, 00:19:46.188 "transport_tos": 0, 00:19:46.188 "nvme_error_stat": false, 00:19:46.188 "rdma_srq_size": 0, 00:19:46.188 "io_path_stat": false, 00:19:46.188 "allow_accel_sequence": false, 00:19:46.188 "rdma_max_cq_size": 0, 00:19:46.188 "rdma_cm_event_timeout_ms": 0, 00:19:46.188 "dhchap_digests": [ 00:19:46.188 "sha256", 00:19:46.188 "sha384", 00:19:46.188 "sha512" 00:19:46.188 ], 00:19:46.188 "dhchap_dhgroups": [ 00:19:46.188 "null", 00:19:46.188 "ffdhe2048", 00:19:46.188 "ffdhe3072", 00:19:46.188 "ffdhe4096", 00:19:46.188 "ffdhe6144", 00:19:46.188 "ffdhe8192" 00:19:46.188 ] 00:19:46.188 } 00:19:46.188 }, 00:19:46.188 { 00:19:46.188 "method": "bdev_nvme_attach_controller", 00:19:46.188 "params": { 00:19:46.188 "name": "nvme0", 00:19:46.188 "trtype": "TCP", 00:19:46.188 "adrfam": "IPv4", 
00:19:46.188 "traddr": "127.0.0.1", 00:19:46.188 "trsvcid": "4420", 00:19:46.188 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:46.188 "prchk_reftag": false, 00:19:46.188 "prchk_guard": false, 00:19:46.188 "ctrlr_loss_timeout_sec": 0, 00:19:46.188 "reconnect_delay_sec": 0, 00:19:46.188 "fast_io_fail_timeout_sec": 0, 00:19:46.188 "psk": "key0", 00:19:46.188 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:46.188 "hdgst": false, 00:19:46.188 "ddgst": false, 00:19:46.188 "multipath": "multipath" 00:19:46.188 } 00:19:46.188 }, 00:19:46.188 { 00:19:46.188 "method": "bdev_nvme_set_hotplug", 00:19:46.188 "params": { 00:19:46.188 "period_us": 100000, 00:19:46.188 "enable": false 00:19:46.188 } 00:19:46.188 }, 00:19:46.188 { 00:19:46.188 "method": "bdev_wait_for_examine" 00:19:46.188 } 00:19:46.188 ] 00:19:46.188 }, 00:19:46.188 { 00:19:46.188 "subsystem": "nbd", 00:19:46.188 "config": [] 00:19:46.188 } 00:19:46.188 ] 00:19:46.188 }' 00:19:46.188 [2024-11-26 19:53:41.429893] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 00:19:46.188 [2024-11-26 19:53:41.429949] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84073 ] 00:19:46.447 [2024-11-26 19:53:41.567137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:46.447 [2024-11-26 19:53:41.597971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:46.705 [2024-11-26 19:53:41.707545] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:46.705 [2024-11-26 19:53:41.750567] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:47.272 19:53:42 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:47.272 19:53:42 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:19:47.272 19:53:42 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:19:47.272 19:53:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:19:47.272 19:53:42 keyring_file -- keyring/file.sh@121 -- # jq length 00:19:47.272 19:53:42 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:19:47.272 19:53:42 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:19:47.272 19:53:42 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:19:47.272 19:53:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:19:47.272 19:53:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:19:47.272 19:53:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:19:47.272 19:53:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:19:47.531 19:53:42 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:19:47.531 19:53:42 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:19:47.531 19:53:42 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:19:47.531 19:53:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:19:47.531 19:53:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:19:47.531 19:53:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:19:47.531 19:53:42 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:19:47.814 19:53:42 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:19:47.814 19:53:42 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:19:47.814 19:53:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:19:47.814 19:53:42 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:19:48.075 19:53:43 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:19:48.075 19:53:43 keyring_file -- keyring/file.sh@1 -- # cleanup 00:19:48.075 19:53:43 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.CN690URBjV /tmp/tmp.sMXW5Wkziz 00:19:48.075 19:53:43 keyring_file -- keyring/file.sh@20 -- # killprocess 84073 00:19:48.075 19:53:43 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 84073 ']' 00:19:48.075 19:53:43 keyring_file -- common/autotest_common.sh@958 -- # kill -0 84073 00:19:48.075 19:53:43 keyring_file -- common/autotest_common.sh@959 -- # uname 00:19:48.075 19:53:43 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:48.075 19:53:43 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84073 00:19:48.075 19:53:43 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:48.075 19:53:43 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:48.075 killing process with pid 84073 00:19:48.075 19:53:43 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84073' 00:19:48.075 19:53:43 keyring_file -- common/autotest_common.sh@973 -- # kill 84073 00:19:48.075 Received shutdown signal, test time was about 1.000000 seconds 00:19:48.075 00:19:48.075 Latency(us) 00:19:48.075 [2024-11-26T19:53:43.322Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:48.075 [2024-11-26T19:53:43.322Z] =================================================================================================================== 00:19:48.075 [2024-11-26T19:53:43.322Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:48.075 19:53:43 keyring_file -- common/autotest_common.sh@978 -- # wait 84073 00:19:48.075 19:53:43 keyring_file -- keyring/file.sh@21 -- # killprocess 83826 00:19:48.075 19:53:43 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 83826 ']' 00:19:48.075 19:53:43 keyring_file -- common/autotest_common.sh@958 -- # kill -0 83826 00:19:48.075 19:53:43 keyring_file -- common/autotest_common.sh@959 -- # uname 00:19:48.075 19:53:43 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:48.075 19:53:43 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83826 00:19:48.075 19:53:43 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:48.075 19:53:43 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:48.075 killing process with pid 83826 00:19:48.075 19:53:43 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83826' 00:19:48.075 19:53:43 keyring_file -- common/autotest_common.sh@973 -- # kill 83826 00:19:48.075 19:53:43 keyring_file -- common/autotest_common.sh@978 -- # wait 83826 00:19:48.333 00:19:48.333 real 0m12.892s 00:19:48.333 user 0m31.813s 00:19:48.333 sys 0m2.121s 00:19:48.333 ************************************ 00:19:48.333 END TEST keyring_file 00:19:48.333 
************************************ 00:19:48.333 19:53:43 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:48.333 19:53:43 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:19:48.333 19:53:43 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:19:48.333 19:53:43 -- spdk/autotest.sh@294 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:19:48.333 19:53:43 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:48.333 19:53:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:48.333 19:53:43 -- common/autotest_common.sh@10 -- # set +x 00:19:48.333 ************************************ 00:19:48.333 START TEST keyring_linux 00:19:48.333 ************************************ 00:19:48.333 19:53:43 keyring_linux -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:19:48.333 Joined session keyring: 551403829 00:19:48.333 * Looking for test storage... 00:19:48.333 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:19:48.333 19:53:43 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:48.333 19:53:43 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:19:48.333 19:53:43 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:48.592 19:53:43 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:48.592 19:53:43 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:48.592 19:53:43 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:48.592 19:53:43 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:48.592 19:53:43 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:19:48.593 19:53:43 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:19:48.593 19:53:43 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:19:48.593 19:53:43 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:19:48.593 19:53:43 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:19:48.593 19:53:43 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:19:48.593 19:53:43 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:19:48.593 19:53:43 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:48.593 19:53:43 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:19:48.593 19:53:43 keyring_linux -- scripts/common.sh@345 -- # : 1 00:19:48.593 19:53:43 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:48.593 19:53:43 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:48.593 19:53:43 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:19:48.593 19:53:43 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:19:48.593 19:53:43 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:48.593 19:53:43 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:19:48.593 19:53:43 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:19:48.593 19:53:43 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:19:48.593 19:53:43 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:19:48.593 19:53:43 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:48.593 19:53:43 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:19:48.593 19:53:43 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:19:48.593 19:53:43 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:48.593 19:53:43 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:48.593 19:53:43 keyring_linux -- scripts/common.sh@368 -- # return 0 00:19:48.593 19:53:43 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:48.593 19:53:43 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:48.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:48.593 --rc genhtml_branch_coverage=1 00:19:48.593 --rc genhtml_function_coverage=1 00:19:48.593 --rc genhtml_legend=1 00:19:48.593 --rc geninfo_all_blocks=1 00:19:48.593 --rc geninfo_unexecuted_blocks=1 00:19:48.593 00:19:48.593 ' 00:19:48.593 19:53:43 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:48.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:48.593 --rc genhtml_branch_coverage=1 00:19:48.593 --rc genhtml_function_coverage=1 00:19:48.593 --rc genhtml_legend=1 00:19:48.593 --rc geninfo_all_blocks=1 00:19:48.593 --rc geninfo_unexecuted_blocks=1 00:19:48.593 00:19:48.593 ' 00:19:48.593 19:53:43 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:48.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:48.593 --rc genhtml_branch_coverage=1 00:19:48.593 --rc genhtml_function_coverage=1 00:19:48.593 --rc genhtml_legend=1 00:19:48.593 --rc geninfo_all_blocks=1 00:19:48.593 --rc geninfo_unexecuted_blocks=1 00:19:48.593 00:19:48.593 ' 00:19:48.593 19:53:43 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:48.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:48.593 --rc genhtml_branch_coverage=1 00:19:48.593 --rc genhtml_function_coverage=1 00:19:48.593 --rc genhtml_legend=1 00:19:48.593 --rc geninfo_all_blocks=1 00:19:48.593 --rc geninfo_unexecuted_blocks=1 00:19:48.593 00:19:48.593 ' 00:19:48.593 19:53:43 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:19:48.593 19:53:43 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:48.593 19:53:43 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:19:48.593 19:53:43 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:48.593 19:53:43 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:48.593 19:53:43 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:48.593 19:53:43 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:48.593 19:53:43 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:48.593 19:53:43 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:48.593 19:53:43 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:48.593 19:53:43 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:48.593 19:53:43 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:48.593 19:53:43 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:48.593 19:53:43 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:91838eb1-5852-43eb-90b2-09876f360ab2 00:19:48.593 19:53:43 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=91838eb1-5852-43eb-90b2-09876f360ab2 00:19:48.593 19:53:43 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:48.593 19:53:43 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:48.593 19:53:43 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:48.593 19:53:43 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:48.593 19:53:43 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:48.593 19:53:43 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:19:48.593 19:53:43 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:48.593 19:53:43 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:48.593 19:53:43 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:48.593 19:53:43 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.593 19:53:43 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.593 19:53:43 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.593 19:53:43 keyring_linux -- paths/export.sh@5 -- # export PATH 00:19:48.593 19:53:43 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.593 19:53:43 keyring_linux -- nvmf/common.sh@51 -- # : 0 
00:19:48.593 19:53:43 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:48.593 19:53:43 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:48.593 19:53:43 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:48.593 19:53:43 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:48.593 19:53:43 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:48.593 19:53:43 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:48.593 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:48.593 19:53:43 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:48.593 19:53:43 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:48.593 19:53:43 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:48.593 19:53:43 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:19:48.593 19:53:43 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:19:48.593 19:53:43 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:19:48.593 19:53:43 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:19:48.593 19:53:43 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:19:48.593 19:53:43 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:19:48.593 19:53:43 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:19:48.593 19:53:43 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:19:48.593 19:53:43 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:19:48.593 19:53:43 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:19:48.593 19:53:43 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:19:48.593 19:53:43 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:19:48.593 19:53:43 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:19:48.593 19:53:43 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:19:48.593 19:53:43 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:19:48.593 19:53:43 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:48.593 19:53:43 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:19:48.593 19:53:43 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:19:48.593 19:53:43 keyring_linux -- nvmf/common.sh@733 -- # python - 00:19:48.593 19:53:43 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:19:48.593 /tmp/:spdk-test:key0 00:19:48.593 19:53:43 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:19:48.593 19:53:43 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:19:48.593 19:53:43 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:19:48.593 19:53:43 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:19:48.593 19:53:43 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:19:48.593 19:53:43 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:19:48.593 19:53:43 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:19:48.593 19:53:43 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:19:48.593 19:53:43 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:19:48.593 19:53:43 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:19:48.593 19:53:43 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:48.594 19:53:43 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:19:48.594 19:53:43 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:19:48.594 19:53:43 keyring_linux -- nvmf/common.sh@733 -- # python - 00:19:48.594 19:53:43 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:19:48.594 /tmp/:spdk-test:key1 00:19:48.594 19:53:43 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:19:48.594 19:53:43 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:48.594 19:53:43 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=84189 00:19:48.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:48.594 19:53:43 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 84189 00:19:48.594 19:53:43 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 84189 ']' 00:19:48.594 19:53:43 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:48.594 19:53:43 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:48.594 19:53:43 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:48.594 19:53:43 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:48.594 19:53:43 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:19:48.594 [2024-11-26 19:53:43.804283] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
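[editor note] The /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1 files prepared above hold the PSK in the interchange format that format_interchange_psk emits ("NVMeTLSkey-1:00:<base64>:", where "00" reflects digest=0). A sketch of that encoding, written as a shell-driven python one-liner like the traced nvmf/common.sh, and assuming — as the 48-character base64 payload above suggests — that the payload is the raw key string with a 4-byte CRC-32 appended (byte order assumed little-endian here):

  key=00112233445566778899aabbccddeeff
  python3 - "$key" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()                   # key material exactly as passed on the command line
crc = zlib.crc32(key).to_bytes(4, "little")  # assumed: CRC-32 of the key bytes, little-endian
print("NVMeTLSkey-1:00:" + base64.b64encode(key + crc).decode() + ":")
PY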
00:19:48.594 [2024-11-26 19:53:43.804571] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84189 ] 00:19:48.853 [2024-11-26 19:53:43.948447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:48.853 [2024-11-26 19:53:43.978619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:48.853 [2024-11-26 19:53:44.017891] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:49.421 19:53:44 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:49.421 19:53:44 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:19:49.421 19:53:44 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:19:49.421 19:53:44 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.421 19:53:44 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:19:49.421 [2024-11-26 19:53:44.656706] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:49.680 null0 00:19:49.680 [2024-11-26 19:53:44.688680] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:49.680 [2024-11-26 19:53:44.688805] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:19:49.680 19:53:44 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.680 19:53:44 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:19:49.680 370675076 00:19:49.680 19:53:44 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:19:49.680 758092875 00:19:49.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:49.680 19:53:44 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=84207 00:19:49.680 19:53:44 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 84207 /var/tmp/bperf.sock 00:19:49.680 19:53:44 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 84207 ']' 00:19:49.680 19:53:44 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:49.680 19:53:44 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:49.680 19:53:44 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:49.680 19:53:44 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:19:49.680 19:53:44 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:49.680 19:53:44 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:19:49.680 [2024-11-26 19:53:44.750598] Starting SPDK v25.01-pre git sha1 fc308e3c5 / DPDK 24.03.0 initialization... 
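[editor note] The two serial numbers printed above (370675076 and 758092875) are what keyctl returns when the interchange-format PSKs are attached as "user" keys to the session keyring (@s); the :spdk-test:key0/:spdk-test:key1 names are what the later --psk arguments refer to. A short sketch of the same kernel-keyring handling (reading the key data from the temp file prepared earlier is an assumption about how the test supplies it):

  sn=$(keyctl add user :spdk-test:key0 "$(cat /tmp/:spdk-test:key0)" @s)  # prints the new key's serial number
  keyctl print "$sn"                        # dumps the stored NVMeTLSkey-1:... string
  keyctl search @s user :spdk-test:key0     # resolves the name back to the same serial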
00:19:49.680 [2024-11-26 19:53:44.750657] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84207 ] 00:19:49.680 [2024-11-26 19:53:44.891268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.939 [2024-11-26 19:53:44.925984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:49.939 19:53:44 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:49.939 19:53:44 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:19:49.939 19:53:44 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:19:49.939 19:53:44 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:19:49.940 19:53:45 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:19:49.940 19:53:45 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:50.198 [2024-11-26 19:53:45.359978] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:50.198 19:53:45 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:19:50.198 19:53:45 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:19:50.457 [2024-11-26 19:53:45.575225] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:50.457 nvme0n1 00:19:50.457 19:53:45 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:19:50.457 19:53:45 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:19:50.457 19:53:45 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:19:50.457 19:53:45 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:19:50.457 19:53:45 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:19:50.457 19:53:45 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:19:50.715 19:53:45 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:19:50.715 19:53:45 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:19:50.715 19:53:45 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:19:50.715 19:53:45 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:19:50.715 19:53:45 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:19:50.715 19:53:45 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:19:50.715 19:53:45 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:19:50.973 19:53:46 keyring_linux -- keyring/linux.sh@25 -- # sn=370675076 00:19:50.973 19:53:46 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:19:50.973 19:53:46 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
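[editor note] The check_keys steps above compare both views of key0: the serial reported by the bdevperf keyring (keyring_get_keys over /var/tmp/bperf.sock) against the serial the kernel session keyring returns for the same name. Roughly the same cross-check by hand, using only commands that appear in the trace:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys \
      | jq -r '.[] | select(.name == ":spdk-test:key0") | .sn'   # serial as seen by SPDK
  keyctl search @s user :spdk-test:key0                          # serial as seen by the kernel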
00:19:50.973 19:53:46 keyring_linux -- keyring/linux.sh@26 -- # [[ 370675076 == \3\7\0\6\7\5\0\7\6 ]] 00:19:50.973 19:53:46 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 370675076 00:19:50.973 19:53:46 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:19:50.973 19:53:46 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:50.973 Running I/O for 1 seconds... 00:19:51.907 23810.00 IOPS, 93.01 MiB/s 00:19:51.907 Latency(us) 00:19:51.907 [2024-11-26T19:53:47.154Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:51.907 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:19:51.907 nvme0n1 : 1.01 23812.37 93.02 0.00 0.00 5359.10 3755.72 10889.06 00:19:51.907 [2024-11-26T19:53:47.154Z] =================================================================================================================== 00:19:51.907 [2024-11-26T19:53:47.154Z] Total : 23812.37 93.02 0.00 0.00 5359.10 3755.72 10889.06 00:19:51.908 { 00:19:51.908 "results": [ 00:19:51.908 { 00:19:51.908 "job": "nvme0n1", 00:19:51.908 "core_mask": "0x2", 00:19:51.908 "workload": "randread", 00:19:51.908 "status": "finished", 00:19:51.908 "queue_depth": 128, 00:19:51.908 "io_size": 4096, 00:19:51.908 "runtime": 1.005276, 00:19:51.908 "iops": 23812.36595720976, 00:19:51.908 "mibps": 93.01705452035063, 00:19:51.908 "io_failed": 0, 00:19:51.908 "io_timeout": 0, 00:19:51.908 "avg_latency_us": 5359.103345180177, 00:19:51.908 "min_latency_us": 3755.716923076923, 00:19:51.908 "max_latency_us": 10889.058461538461 00:19:51.908 } 00:19:51.908 ], 00:19:51.908 "core_count": 1 00:19:51.908 } 00:19:51.908 19:53:47 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:19:51.908 19:53:47 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:19:52.165 19:53:47 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:19:52.165 19:53:47 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:19:52.165 19:53:47 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:19:52.165 19:53:47 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:19:52.165 19:53:47 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:19:52.165 19:53:47 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:19:52.422 19:53:47 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:19:52.422 19:53:47 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:19:52.422 19:53:47 keyring_linux -- keyring/linux.sh@23 -- # return 00:19:52.422 19:53:47 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:19:52.422 19:53:47 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:19:52.422 19:53:47 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:19:52.422 
19:53:47 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:19:52.422 19:53:47 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:52.422 19:53:47 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:19:52.422 19:53:47 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:52.422 19:53:47 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:19:52.422 19:53:47 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:19:52.681 [2024-11-26 19:53:47.799870] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:52.681 [2024-11-26 19:53:47.800030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11bb5d0 (107): Transport endpoint is not connected 00:19:52.681 [2024-11-26 19:53:47.801022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11bb5d0 (9): Bad file descriptor 00:19:52.681 [2024-11-26 19:53:47.802022] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:19:52.681 [2024-11-26 19:53:47.802038] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:19:52.681 [2024-11-26 19:53:47.802044] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:19:52.681 [2024-11-26 19:53:47.802049] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:19:52.681 request: 00:19:52.681 { 00:19:52.681 "name": "nvme0", 00:19:52.681 "trtype": "tcp", 00:19:52.681 "traddr": "127.0.0.1", 00:19:52.681 "adrfam": "ipv4", 00:19:52.681 "trsvcid": "4420", 00:19:52.681 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:52.681 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:52.681 "prchk_reftag": false, 00:19:52.681 "prchk_guard": false, 00:19:52.681 "hdgst": false, 00:19:52.681 "ddgst": false, 00:19:52.681 "psk": ":spdk-test:key1", 00:19:52.681 "allow_unrecognized_csi": false, 00:19:52.681 "method": "bdev_nvme_attach_controller", 00:19:52.681 "req_id": 1 00:19:52.681 } 00:19:52.681 Got JSON-RPC error response 00:19:52.681 response: 00:19:52.681 { 00:19:52.681 "code": -5, 00:19:52.681 "message": "Input/output error" 00:19:52.681 } 00:19:52.681 19:53:47 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:19:52.681 19:53:47 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:52.681 19:53:47 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:52.681 19:53:47 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:52.681 19:53:47 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:19:52.681 19:53:47 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:19:52.681 19:53:47 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:19:52.681 19:53:47 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:19:52.681 19:53:47 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:19:52.681 19:53:47 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:19:52.681 19:53:47 keyring_linux -- keyring/linux.sh@33 -- # sn=370675076 00:19:52.681 19:53:47 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 370675076 00:19:52.681 1 links removed 00:19:52.681 19:53:47 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:19:52.681 19:53:47 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:19:52.681 19:53:47 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:19:52.681 19:53:47 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:19:52.681 19:53:47 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:19:52.681 19:53:47 keyring_linux -- keyring/linux.sh@33 -- # sn=758092875 00:19:52.681 19:53:47 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 758092875 00:19:52.681 1 links removed 00:19:52.681 19:53:47 keyring_linux -- keyring/linux.sh@41 -- # killprocess 84207 00:19:52.681 19:53:47 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 84207 ']' 00:19:52.681 19:53:47 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 84207 00:19:52.681 19:53:47 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:19:52.681 19:53:47 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:52.681 19:53:47 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84207 00:19:52.681 killing process with pid 84207 00:19:52.681 Received shutdown signal, test time was about 1.000000 seconds 00:19:52.681 00:19:52.681 Latency(us) 00:19:52.681 [2024-11-26T19:53:47.928Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:52.681 [2024-11-26T19:53:47.928Z] =================================================================================================================== 00:19:52.681 [2024-11-26T19:53:47.928Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:52.681 19:53:47 keyring_linux -- 
common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:52.681 19:53:47 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:52.681 19:53:47 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84207' 00:19:52.681 19:53:47 keyring_linux -- common/autotest_common.sh@973 -- # kill 84207 00:19:52.681 19:53:47 keyring_linux -- common/autotest_common.sh@978 -- # wait 84207 00:19:53.012 19:53:47 keyring_linux -- keyring/linux.sh@42 -- # killprocess 84189 00:19:53.012 19:53:47 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 84189 ']' 00:19:53.012 19:53:47 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 84189 00:19:53.012 19:53:47 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:19:53.012 19:53:47 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:53.012 19:53:47 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84189 00:19:53.012 killing process with pid 84189 00:19:53.012 19:53:47 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:53.012 19:53:47 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:53.012 19:53:47 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84189' 00:19:53.012 19:53:47 keyring_linux -- common/autotest_common.sh@973 -- # kill 84189 00:19:53.012 19:53:47 keyring_linux -- common/autotest_common.sh@978 -- # wait 84189 00:19:53.012 ************************************ 00:19:53.012 END TEST keyring_linux 00:19:53.012 ************************************ 00:19:53.012 00:19:53.012 real 0m4.671s 00:19:53.012 user 0m8.786s 00:19:53.012 sys 0m1.162s 00:19:53.012 19:53:48 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:53.012 19:53:48 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:19:53.012 19:53:48 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:19:53.012 19:53:48 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:19:53.012 19:53:48 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:19:53.012 19:53:48 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:19:53.012 19:53:48 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:19:53.012 19:53:48 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:19:53.012 19:53:48 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:19:53.012 19:53:48 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:19:53.012 19:53:48 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:19:53.012 19:53:48 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:19:53.012 19:53:48 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:19:53.012 19:53:48 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:19:53.012 19:53:48 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:19:53.012 19:53:48 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:19:53.012 19:53:48 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:19:53.012 19:53:48 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:19:53.012 19:53:48 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:19:53.012 19:53:48 -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:53.012 19:53:48 -- common/autotest_common.sh@10 -- # set +x 00:19:53.012 19:53:48 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:19:53.012 19:53:48 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:19:53.012 19:53:48 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:19:53.012 19:53:48 -- common/autotest_common.sh@10 -- # set +x 00:19:54.384 INFO: APP EXITING 00:19:54.384 INFO: killing all VMs 
00:19:54.384 INFO: killing vhost app 00:19:54.384 INFO: EXIT DONE 00:19:54.948 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:54.948 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:19:54.948 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:19:55.206 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:55.465 Cleaning 00:19:55.465 Removing: /var/run/dpdk/spdk0/config 00:19:55.465 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:19:55.465 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:19:55.465 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:19:55.465 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:19:55.465 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:19:55.465 Removing: /var/run/dpdk/spdk0/hugepage_info 00:19:55.465 Removing: /var/run/dpdk/spdk1/config 00:19:55.465 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:19:55.465 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:19:55.465 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:19:55.465 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:19:55.465 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:19:55.465 Removing: /var/run/dpdk/spdk1/hugepage_info 00:19:55.465 Removing: /var/run/dpdk/spdk2/config 00:19:55.465 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:19:55.465 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:19:55.465 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:19:55.465 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:19:55.465 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:19:55.465 Removing: /var/run/dpdk/spdk2/hugepage_info 00:19:55.465 Removing: /var/run/dpdk/spdk3/config 00:19:55.465 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:19:55.465 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:19:55.465 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:19:55.465 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:19:55.465 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:19:55.465 Removing: /var/run/dpdk/spdk3/hugepage_info 00:19:55.465 Removing: /var/run/dpdk/spdk4/config 00:19:55.465 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:19:55.465 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:19:55.465 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:19:55.465 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:19:55.465 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:19:55.465 Removing: /var/run/dpdk/spdk4/hugepage_info 00:19:55.465 Removing: /dev/shm/nvmf_trace.0 00:19:55.465 Removing: /dev/shm/spdk_tgt_trace.pid56084 00:19:55.465 Removing: /var/run/dpdk/spdk0 00:19:55.465 Removing: /var/run/dpdk/spdk1 00:19:55.465 Removing: /var/run/dpdk/spdk2 00:19:55.465 Removing: /var/run/dpdk/spdk3 00:19:55.465 Removing: /var/run/dpdk/spdk4 00:19:55.465 Removing: /var/run/dpdk/spdk_pid55937 00:19:55.465 Removing: /var/run/dpdk/spdk_pid56084 00:19:55.465 Removing: /var/run/dpdk/spdk_pid56285 00:19:55.465 Removing: /var/run/dpdk/spdk_pid56366 00:19:55.465 Removing: /var/run/dpdk/spdk_pid56393 00:19:55.465 Removing: /var/run/dpdk/spdk_pid56497 00:19:55.465 Removing: /var/run/dpdk/spdk_pid56515 00:19:55.465 Removing: /var/run/dpdk/spdk_pid56648 00:19:55.465 Removing: /var/run/dpdk/spdk_pid56834 00:19:55.465 Removing: /var/run/dpdk/spdk_pid56982 00:19:55.465 Removing: /var/run/dpdk/spdk_pid57055 00:19:55.465 
Removing: /var/run/dpdk/spdk_pid57133 00:19:55.465 Removing: /var/run/dpdk/spdk_pid57227 00:19:55.465 Removing: /var/run/dpdk/spdk_pid57301 00:19:55.465 Removing: /var/run/dpdk/spdk_pid57339 00:19:55.465 Removing: /var/run/dpdk/spdk_pid57375 00:19:55.465 Removing: /var/run/dpdk/spdk_pid57439 00:19:55.465 Removing: /var/run/dpdk/spdk_pid57522 00:19:55.465 Removing: /var/run/dpdk/spdk_pid57946 00:19:55.465 Removing: /var/run/dpdk/spdk_pid57987 00:19:55.465 Removing: /var/run/dpdk/spdk_pid58027 00:19:55.465 Removing: /var/run/dpdk/spdk_pid58043 00:19:55.465 Removing: /var/run/dpdk/spdk_pid58099 00:19:55.465 Removing: /var/run/dpdk/spdk_pid58115 00:19:55.465 Removing: /var/run/dpdk/spdk_pid58171 00:19:55.465 Removing: /var/run/dpdk/spdk_pid58186 00:19:55.465 Removing: /var/run/dpdk/spdk_pid58227 00:19:55.465 Removing: /var/run/dpdk/spdk_pid58245 00:19:55.465 Removing: /var/run/dpdk/spdk_pid58280 00:19:55.465 Removing: /var/run/dpdk/spdk_pid58298 00:19:55.465 Removing: /var/run/dpdk/spdk_pid58423 00:19:55.465 Removing: /var/run/dpdk/spdk_pid58458 00:19:55.465 Removing: /var/run/dpdk/spdk_pid58535 00:19:55.465 Removing: /var/run/dpdk/spdk_pid58864 00:19:55.465 Removing: /var/run/dpdk/spdk_pid58880 00:19:55.465 Removing: /var/run/dpdk/spdk_pid58907 00:19:55.465 Removing: /var/run/dpdk/spdk_pid58921 00:19:55.465 Removing: /var/run/dpdk/spdk_pid58937 00:19:55.465 Removing: /var/run/dpdk/spdk_pid58950 00:19:55.465 Removing: /var/run/dpdk/spdk_pid58969 00:19:55.465 Removing: /var/run/dpdk/spdk_pid58979 00:19:55.465 Removing: /var/run/dpdk/spdk_pid58998 00:19:55.465 Removing: /var/run/dpdk/spdk_pid59012 00:19:55.465 Removing: /var/run/dpdk/spdk_pid59027 00:19:55.465 Removing: /var/run/dpdk/spdk_pid59046 00:19:55.465 Removing: /var/run/dpdk/spdk_pid59054 00:19:55.465 Removing: /var/run/dpdk/spdk_pid59070 00:19:55.465 Removing: /var/run/dpdk/spdk_pid59089 00:19:55.465 Removing: /var/run/dpdk/spdk_pid59102 00:19:55.465 Removing: /var/run/dpdk/spdk_pid59118 00:19:55.465 Removing: /var/run/dpdk/spdk_pid59131 00:19:55.465 Removing: /var/run/dpdk/spdk_pid59145 00:19:55.465 Removing: /var/run/dpdk/spdk_pid59160 00:19:55.465 Removing: /var/run/dpdk/spdk_pid59191 00:19:55.465 Removing: /var/run/dpdk/spdk_pid59204 00:19:55.465 Removing: /var/run/dpdk/spdk_pid59234 00:19:55.465 Removing: /var/run/dpdk/spdk_pid59306 00:19:55.465 Removing: /var/run/dpdk/spdk_pid59329 00:19:55.465 Removing: /var/run/dpdk/spdk_pid59338 00:19:55.465 Removing: /var/run/dpdk/spdk_pid59367 00:19:55.465 Removing: /var/run/dpdk/spdk_pid59376 00:19:55.725 Removing: /var/run/dpdk/spdk_pid59384 00:19:55.725 Removing: /var/run/dpdk/spdk_pid59421 00:19:55.725 Removing: /var/run/dpdk/spdk_pid59434 00:19:55.725 Removing: /var/run/dpdk/spdk_pid59463 00:19:55.725 Removing: /var/run/dpdk/spdk_pid59471 00:19:55.725 Removing: /var/run/dpdk/spdk_pid59476 00:19:55.725 Removing: /var/run/dpdk/spdk_pid59486 00:19:55.725 Removing: /var/run/dpdk/spdk_pid59495 00:19:55.725 Removing: /var/run/dpdk/spdk_pid59499 00:19:55.725 Removing: /var/run/dpdk/spdk_pid59509 00:19:55.725 Removing: /var/run/dpdk/spdk_pid59518 00:19:55.725 Removing: /var/run/dpdk/spdk_pid59547 00:19:55.725 Removing: /var/run/dpdk/spdk_pid59568 00:19:55.725 Removing: /var/run/dpdk/spdk_pid59577 00:19:55.725 Removing: /var/run/dpdk/spdk_pid59606 00:19:55.725 Removing: /var/run/dpdk/spdk_pid59610 00:19:55.725 Removing: /var/run/dpdk/spdk_pid59617 00:19:55.725 Removing: /var/run/dpdk/spdk_pid59658 00:19:55.725 Removing: /var/run/dpdk/spdk_pid59669 00:19:55.725 Removing: 
/var/run/dpdk/spdk_pid59696 00:19:55.725 Removing: /var/run/dpdk/spdk_pid59698 00:19:55.725 Removing: /var/run/dpdk/spdk_pid59705 00:19:55.725 Removing: /var/run/dpdk/spdk_pid59713 00:19:55.725 Removing: /var/run/dpdk/spdk_pid59720 00:19:55.725 Removing: /var/run/dpdk/spdk_pid59728 00:19:55.725 Removing: /var/run/dpdk/spdk_pid59730 00:19:55.725 Removing: /var/run/dpdk/spdk_pid59737 00:19:55.725 Removing: /var/run/dpdk/spdk_pid59814 00:19:55.725 Removing: /var/run/dpdk/spdk_pid59856 00:19:55.725 Removing: /var/run/dpdk/spdk_pid59974 00:19:55.725 Removing: /var/run/dpdk/spdk_pid60002 00:19:55.725 Removing: /var/run/dpdk/spdk_pid60037 00:19:55.725 Removing: /var/run/dpdk/spdk_pid60057 00:19:55.725 Removing: /var/run/dpdk/spdk_pid60073 00:19:55.725 Removing: /var/run/dpdk/spdk_pid60088 00:19:55.725 Removing: /var/run/dpdk/spdk_pid60125 00:19:55.725 Removing: /var/run/dpdk/spdk_pid60135 00:19:55.725 Removing: /var/run/dpdk/spdk_pid60213 00:19:55.725 Removing: /var/run/dpdk/spdk_pid60229 00:19:55.725 Removing: /var/run/dpdk/spdk_pid60262 00:19:55.725 Removing: /var/run/dpdk/spdk_pid60322 00:19:55.725 Removing: /var/run/dpdk/spdk_pid60361 00:19:55.725 Removing: /var/run/dpdk/spdk_pid60389 00:19:55.725 Removing: /var/run/dpdk/spdk_pid60484 00:19:55.725 Removing: /var/run/dpdk/spdk_pid60521 00:19:55.725 Removing: /var/run/dpdk/spdk_pid60559 00:19:55.725 Removing: /var/run/dpdk/spdk_pid60780 00:19:55.725 Removing: /var/run/dpdk/spdk_pid60872 00:19:55.725 Removing: /var/run/dpdk/spdk_pid60901 00:19:55.725 Removing: /var/run/dpdk/spdk_pid60930 00:19:55.725 Removing: /var/run/dpdk/spdk_pid60958 00:19:55.725 Removing: /var/run/dpdk/spdk_pid60992 00:19:55.725 Removing: /var/run/dpdk/spdk_pid61025 00:19:55.725 Removing: /var/run/dpdk/spdk_pid61057 00:19:55.725 Removing: /var/run/dpdk/spdk_pid61439 00:19:55.725 Removing: /var/run/dpdk/spdk_pid61477 00:19:55.725 Removing: /var/run/dpdk/spdk_pid61807 00:19:55.725 Removing: /var/run/dpdk/spdk_pid62259 00:19:55.725 Removing: /var/run/dpdk/spdk_pid62521 00:19:55.725 Removing: /var/run/dpdk/spdk_pid63383 00:19:55.725 Removing: /var/run/dpdk/spdk_pid64298 00:19:55.725 Removing: /var/run/dpdk/spdk_pid64421 00:19:55.725 Removing: /var/run/dpdk/spdk_pid64483 00:19:55.725 Removing: /var/run/dpdk/spdk_pid65894 00:19:55.725 Removing: /var/run/dpdk/spdk_pid66198 00:19:55.725 Removing: /var/run/dpdk/spdk_pid69564 00:19:55.725 Removing: /var/run/dpdk/spdk_pid69913 00:19:55.725 Removing: /var/run/dpdk/spdk_pid70022 00:19:55.725 Removing: /var/run/dpdk/spdk_pid70160 00:19:55.725 Removing: /var/run/dpdk/spdk_pid70193 00:19:55.725 Removing: /var/run/dpdk/spdk_pid70218 00:19:55.725 Removing: /var/run/dpdk/spdk_pid70247 00:19:55.725 Removing: /var/run/dpdk/spdk_pid70336 00:19:55.725 Removing: /var/run/dpdk/spdk_pid70472 00:19:55.725 Removing: /var/run/dpdk/spdk_pid70619 00:19:55.725 Removing: /var/run/dpdk/spdk_pid70695 00:19:55.725 Removing: /var/run/dpdk/spdk_pid70890 00:19:55.725 Removing: /var/run/dpdk/spdk_pid70962 00:19:55.725 Removing: /var/run/dpdk/spdk_pid71049 00:19:55.725 Removing: /var/run/dpdk/spdk_pid71401 00:19:55.725 Removing: /var/run/dpdk/spdk_pid71822 00:19:55.725 Removing: /var/run/dpdk/spdk_pid71823 00:19:55.725 Removing: /var/run/dpdk/spdk_pid71824 00:19:55.725 Removing: /var/run/dpdk/spdk_pid72087 00:19:55.725 Removing: /var/run/dpdk/spdk_pid72350 00:19:55.725 Removing: /var/run/dpdk/spdk_pid72726 00:19:55.725 Removing: /var/run/dpdk/spdk_pid72728 00:19:55.725 Removing: /var/run/dpdk/spdk_pid73052 00:19:55.725 Removing: /var/run/dpdk/spdk_pid73071 
00:19:55.725 Removing: /var/run/dpdk/spdk_pid73091 00:19:55.725 Removing: /var/run/dpdk/spdk_pid73116 00:19:55.725 Removing: /var/run/dpdk/spdk_pid73121 00:19:55.725 Removing: /var/run/dpdk/spdk_pid73486 00:19:55.725 Removing: /var/run/dpdk/spdk_pid73529 00:19:55.725 Removing: /var/run/dpdk/spdk_pid73854 00:19:55.725 Removing: /var/run/dpdk/spdk_pid74055 00:19:55.725 Removing: /var/run/dpdk/spdk_pid74482 00:19:55.725 Removing: /var/run/dpdk/spdk_pid75028 00:19:55.725 Removing: /var/run/dpdk/spdk_pid75847 00:19:55.725 Removing: /var/run/dpdk/spdk_pid76487 00:19:55.725 Removing: /var/run/dpdk/spdk_pid76489 00:19:55.725 Removing: /var/run/dpdk/spdk_pid78541 00:19:55.725 Removing: /var/run/dpdk/spdk_pid78596 00:19:55.725 Removing: /var/run/dpdk/spdk_pid78655 00:19:55.725 Removing: /var/run/dpdk/spdk_pid78706 00:19:55.725 Removing: /var/run/dpdk/spdk_pid78822 00:19:55.725 Removing: /var/run/dpdk/spdk_pid78877 00:19:55.725 Removing: /var/run/dpdk/spdk_pid78937 00:19:55.725 Removing: /var/run/dpdk/spdk_pid78986 00:19:55.725 Removing: /var/run/dpdk/spdk_pid79349 00:19:55.725 Removing: /var/run/dpdk/spdk_pid80573 00:19:55.725 Removing: /var/run/dpdk/spdk_pid80719 00:19:55.725 Removing: /var/run/dpdk/spdk_pid80961 00:19:55.725 Removing: /var/run/dpdk/spdk_pid81558 00:19:55.725 Removing: /var/run/dpdk/spdk_pid81723 00:19:55.725 Removing: /var/run/dpdk/spdk_pid81885 00:19:55.725 Removing: /var/run/dpdk/spdk_pid81977 00:19:55.725 Removing: /var/run/dpdk/spdk_pid82147 00:19:55.725 Removing: /var/run/dpdk/spdk_pid82256 00:19:55.725 Removing: /var/run/dpdk/spdk_pid82968 00:19:55.725 Removing: /var/run/dpdk/spdk_pid83003 00:19:55.725 Removing: /var/run/dpdk/spdk_pid83037 00:19:55.983 Removing: /var/run/dpdk/spdk_pid83283 00:19:55.983 Removing: /var/run/dpdk/spdk_pid83318 00:19:55.983 Removing: /var/run/dpdk/spdk_pid83359 00:19:55.983 Removing: /var/run/dpdk/spdk_pid83826 00:19:55.983 Removing: /var/run/dpdk/spdk_pid83839 00:19:55.983 Removing: /var/run/dpdk/spdk_pid84073 00:19:55.983 Removing: /var/run/dpdk/spdk_pid84189 00:19:55.983 Removing: /var/run/dpdk/spdk_pid84207 00:19:55.983 Clean 00:19:55.983 19:53:51 -- common/autotest_common.sh@1453 -- # return 0 00:19:55.983 19:53:51 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:19:55.983 19:53:51 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:55.983 19:53:51 -- common/autotest_common.sh@10 -- # set +x 00:19:55.983 19:53:51 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:19:55.983 19:53:51 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:55.983 19:53:51 -- common/autotest_common.sh@10 -- # set +x 00:19:55.983 19:53:51 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:19:55.983 19:53:51 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:19:55.983 19:53:51 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:19:55.983 19:53:51 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:19:55.983 19:53:51 -- spdk/autotest.sh@398 -- # hostname 00:19:55.983 19:53:51 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:19:56.242 geninfo: WARNING: invalid characters removed from testname! 
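The cov_test.info capture above is folded into the final report by the steps that follow: merge it with the cov_base.info baseline produced earlier in the run, then strip DPDK, system, and example/app sources so only SPDK library code is counted. Condensed, with the repeated --rc branch/function-coverage flags trimmed for readability, the sequence is roughly:

  OUT=/home/vagrant/spdk_repo/output   # the log's .../spdk/../output path, resolved
  lcov -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
  lcov -q -r "$OUT/cov_total.info" '*/dpdk/*' -o "$OUT/cov_total.info"
  lcov -q -r "$OUT/cov_total.info" --ignore-errors unused,unused '/usr/*' -o "$OUT/cov_total.info"
  lcov -q -r "$OUT/cov_total.info" '*/examples/vmd/*' -o "$OUT/cov_total.info"
  lcov -q -r "$OUT/cov_total.info" '*/app/spdk_lspci/*' -o "$OUT/cov_total.info"
  lcov -q -r "$OUT/cov_total.info" '*/app/spdk_top/*' -o "$OUT/cov_total.info"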
00:20:22.855 19:54:13 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:22.855 19:54:16 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:23.786 19:54:18 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:26.308 19:54:21 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:28.283 19:54:23 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:30.811 19:54:25 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:32.709 19:54:27 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:20:32.709 19:54:27 -- spdk/autorun.sh@1 -- $ timing_finish 00:20:32.709 19:54:27 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:20:32.709 19:54:27 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:20:32.710 19:54:27 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:20:32.710 19:54:27 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:20:32.710 + [[ -n 4993 ]] 00:20:32.710 + sudo kill 4993 00:20:32.718 [Pipeline] } 00:20:32.734 [Pipeline] // timeout 00:20:32.740 [Pipeline] } 00:20:32.756 [Pipeline] // stage 00:20:32.762 [Pipeline] } 00:20:32.778 [Pipeline] // catchError 00:20:32.788 [Pipeline] stage 00:20:32.790 [Pipeline] { (Stop VM) 00:20:32.803 [Pipeline] sh 00:20:33.081 + vagrant halt 00:20:35.610 ==> default: Halting domain... 
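With coverage merged and the build-timing flame graph step run against timing.txt, the job kills its leftover background process (pid 4993) and the pipeline takes over: halt the guest, destroy it, and move the results into the Jenkins workspace for the compress/size-check/archive epilogue that closes the run below. The teardown condenses to roughly:

  vagrant halt           # graceful shutdown ("==> default: Halting domain...")
  vagrant destroy -f     # remove the domain without prompting
  mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output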
00:20:38.900 [Pipeline] sh 00:20:39.177 + vagrant destroy -f 00:20:41.810 ==> default: Removing domain... 00:20:41.822 [Pipeline] sh 00:20:42.099 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:20:42.107 [Pipeline] } 00:20:42.123 [Pipeline] // stage 00:20:42.129 [Pipeline] } 00:20:42.146 [Pipeline] // dir 00:20:42.152 [Pipeline] } 00:20:42.169 [Pipeline] // wrap 00:20:42.175 [Pipeline] } 00:20:42.191 [Pipeline] // catchError 00:20:42.201 [Pipeline] stage 00:20:42.204 [Pipeline] { (Epilogue) 00:20:42.221 [Pipeline] sh 00:20:42.501 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:20:47.791 [Pipeline] catchError 00:20:47.793 [Pipeline] { 00:20:47.809 [Pipeline] sh 00:20:48.142 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:20:48.142 Artifacts sizes are good 00:20:48.150 [Pipeline] } 00:20:48.165 [Pipeline] // catchError 00:20:48.177 [Pipeline] archiveArtifacts 00:20:48.183 Archiving artifacts 00:20:48.309 [Pipeline] cleanWs 00:20:48.319 [WS-CLEANUP] Deleting project workspace... 00:20:48.319 [WS-CLEANUP] Deferred wipeout is used... 00:20:48.325 [WS-CLEANUP] done 00:20:48.327 [Pipeline] } 00:20:48.343 [Pipeline] // stage 00:20:48.348 [Pipeline] } 00:20:48.363 [Pipeline] // node 00:20:48.370 [Pipeline] End of Pipeline 00:20:48.406 Finished: SUCCESS